Feb 16 20:56:20 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 20:56:20 crc restorecon[4682]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 20:56:20 crc restorecon[4682]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc 
restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc 
restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 
crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 20:56:20 crc restorecon[4682]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:20 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc 
restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc 
restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc 
restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 20:56:21 crc restorecon[4682]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 
crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc 
restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc 
restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc 
restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc 
restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4682]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 20:56:21 crc restorecon[4682]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 20:56:23 crc kubenswrapper[4805]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 20:56:23 crc kubenswrapper[4805]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 20:56:23 crc kubenswrapper[4805]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 20:56:23 crc kubenswrapper[4805]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 20:56:23 crc kubenswrapper[4805]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 20:56:23 crc kubenswrapper[4805]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.319001 4805 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325306 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325329 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325335 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325341 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325383 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325463 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325473 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325481 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325491 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325500 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325510 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325519 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325528 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325535 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325542 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325550 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325557 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325564 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325571 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325586 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325594 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325602 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325608 4805 feature_gate.go:330] 
unrecognized feature gate: ChunkSizeMiB Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325616 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325622 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325629 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325637 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325644 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325650 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325659 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325668 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325675 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325683 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325690 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325697 4805 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325703 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325709 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325739 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325747 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325754 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325761 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325768 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325775 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325782 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325788 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325795 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325803 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325810 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325821 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325829 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325836 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325845 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325851 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325859 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325866 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325873 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325879 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325886 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325893 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325900 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325907 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325914 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325921 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325928 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325934 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325941 4805 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325948 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325953 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325959 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325977 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.325983 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326217 4805 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326251 4805 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326268 4805 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326277 4805 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326286 4805 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326293 4805 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326302 4805 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326311 4805 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326320 4805 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326328 4805 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326337 4805 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326385 4805 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326395 4805 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326405 4805 flags.go:64] FLAG: --cgroup-root=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326412 4805 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326420 4805 flags.go:64] FLAG: --client-ca-file=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326428 4805 flags.go:64] FLAG: --cloud-config=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326435 4805 flags.go:64] FLAG: --cloud-provider=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326443 4805 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326454 4805 flags.go:64] FLAG: --cluster-domain=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326462 4805 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326470 4805 flags.go:64] FLAG: --config-dir=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326477 4805 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326487 4805 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326534 4805 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326544 4805 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326552 4805 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326561 4805 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326569 4805 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326578 4805 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326586 4805 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326594 4805 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326601 4805 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326611 4805 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326710 4805 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326745 4805 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326754 4805 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326764 4805 flags.go:64] FLAG: --enable-server="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326772 4805 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326785 4805 flags.go:64] FLAG: --event-burst="100"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326793 4805 flags.go:64] FLAG: --event-qps="50"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326801 4805 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326808 4805 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326815 4805 flags.go:64] FLAG: --eviction-hard=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326828 4805 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326838 4805 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326847 4805 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326854 4805 flags.go:64] FLAG: --eviction-soft=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326861 4805 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326868 4805 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326875 4805 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326883 4805 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326891 4805 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326899 4805 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326906 4805 flags.go:64] FLAG: --feature-gates=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326917 4805 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326925 4805 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326934 4805 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326942 4805 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326950 4805 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326958 4805 flags.go:64] FLAG: --help="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326966 4805 flags.go:64] FLAG: --hostname-override=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326973 4805 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326981 4805 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326989 4805 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.326997 4805 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327004 4805 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327011 4805 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327019 4805 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327026 4805 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327033 4805 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327040 4805 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327048 4805 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327057 4805 flags.go:64] FLAG: --kube-reserved=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327065 4805 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327072 4805 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327082 4805 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327091 4805 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327099 4805 flags.go:64] FLAG: --lock-file=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327107 4805 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327115 4805 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327124 4805 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327136 4805 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327143 4805 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327151 4805 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327159 4805 flags.go:64] FLAG: --logging-format="text"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327167 4805 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327174 4805 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327181 4805 flags.go:64] FLAG: --manifest-url=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327187 4805 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327195 4805 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327202 4805 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327210 4805 flags.go:64] FLAG: --max-pods="110"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327216 4805 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327222 4805 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327228 4805 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327235 4805 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327242 4805 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327248 4805 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327254 4805 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327269 4805 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327276 4805 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327283 4805 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327289 4805 flags.go:64] FLAG: --pod-cidr=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327295 4805 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327305 4805 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327325 4805 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327332 4805 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327338 4805 flags.go:64] FLAG: --port="10250"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327345 4805 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327352 4805 flags.go:64] FLAG: --provider-id=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327358 4805 flags.go:64] FLAG: --qos-reserved=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327364 4805 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327371 4805 flags.go:64] FLAG: --register-node="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327377 4805 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327383 4805 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327394 4805 flags.go:64] FLAG: --registry-burst="10"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327437 4805 flags.go:64] FLAG: --registry-qps="5"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327443 4805 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327450 4805 flags.go:64] FLAG: --reserved-memory=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327461 4805 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327468 4805 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327478 4805 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327485 4805 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327493 4805 flags.go:64] FLAG: --runonce="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327501 4805 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327509 4805 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327518 4805 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327525 4805 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327531 4805 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327538 4805 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327544 4805 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327551 4805 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327557 4805 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327564 4805 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327571 4805 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327579 4805 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327587 4805 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327595 4805 flags.go:64] FLAG: --system-cgroups=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327602 4805 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327629 4805 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327638 4805 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327645 4805 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327656 4805 flags.go:64] FLAG: --tls-min-version=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327663 4805 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327672 4805 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327679 4805 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327686 4805 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327695 4805 flags.go:64] FLAG: --v="2"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327712 4805 flags.go:64] FLAG: --version="false"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327745 4805 flags.go:64] FLAG: --vmodule=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327755 4805 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.327806 4805 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328050 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328060 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328066 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328072 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328078 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328084 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328090 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328096 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328103 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328109 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328115 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328121 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328126 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328132 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328137 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328142 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328147 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328152 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328158 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328163 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328169 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328174 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328179 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328184 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328189 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328195 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328201 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328206 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328221 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328227 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328233 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328239 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328246 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328253 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328260 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328266 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328272 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328278 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328296 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328301 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328307 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328312 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328318 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328324 4805 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328329 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328334 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328339 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328345 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328350 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328356 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328361 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328366 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328373 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328380 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328385 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328391 4805 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328396 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328402 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328408 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328413 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328418 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328423 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328428 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328435 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328441 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328447 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328452 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328457 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328462 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328469 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.328478 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.328495 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.347205 4805 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.347258 4805 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347338 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347347 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347351 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347356 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347360 4805 feature_gate.go:330] unrecognized feature
gate: SigstoreImageVerification Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347364 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347369 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347374 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347380 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347386 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347391 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347395 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347400 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347404 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347408 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347412 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347416 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347419 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347423 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 
20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347427 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347431 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347434 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347437 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347441 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347445 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347448 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347452 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347455 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347458 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347462 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347465 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347469 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347472 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347476 4805 
feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347481 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347485 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347488 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347492 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347495 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347498 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347503 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347506 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347510 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347513 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347517 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347520 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347524 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347527 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 20:56:23 crc 
kubenswrapper[4805]: W0216 20:56:23.347531 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347534 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347538 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347541 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347545 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347548 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347552 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347556 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347559 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347563 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347566 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347570 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347574 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347579 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347584 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347590 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347594 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347598 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347602 4805 feature_gate.go:330] unrecognized feature gate: Example Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347608 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347613 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347617 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347624 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.347633 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347799 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347808 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347813 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347817 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347821 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347825 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347828 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347832 4805 
feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347837 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347844 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347848 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347852 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347855 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347859 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347862 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347866 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347869 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347873 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347877 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347880 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347884 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347887 4805 feature_gate.go:330] unrecognized 
feature gate: CSIDriverSharedResource Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347891 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347895 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347898 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347902 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347907 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347911 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347914 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347918 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347922 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347926 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347930 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347935 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347939 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347943 4805 feature_gate.go:330] unrecognized feature gate: 
InsightsOnDemandDataGather Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347947 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347950 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347954 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347957 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347961 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347965 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347968 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347972 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347975 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347978 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347982 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347986 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347990 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347994 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 
20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.347998 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348002 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348005 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348009 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348012 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348016 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348019 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348023 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348027 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348031 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348036 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348040 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348044 4805 feature_gate.go:330] unrecognized feature gate: Example Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348049 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348053 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348057 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348061 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348065 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348068 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348072 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.348076 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.348082 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.348311 4805 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.354639 4805 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.354776 4805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.356367 4805 server.go:997] "Starting client certificate rotation" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.356399 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.357487 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 14:46:26.872347447 +0000 UTC Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.357645 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.392893 4805 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.403044 4805 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.418579 4805 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.434879 4805 log.go:25] "Validated CRI v1 runtime API" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.467497 4805 log.go:25] "Validated CRI v1 image API" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.469363 4805 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.474372 4805 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-20-52-05-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.474407 4805 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.489246 4805 manager.go:217] Machine: {Timestamp:2026-02-16 20:56:23.486895637 +0000 UTC m=+1.305578942 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f0e28e09-8311-445d-80ef-c735d31fd21e BootID:96338809-94a9-435f-a493-fbf04d8ca44c Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 
Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:90:a3:bd Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:90:a3:bd Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:dd:43:f1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:84:f9:2a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d4:02:f5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e3:e7:56 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c2:d5:4c:71:50:54 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:76:6e:41:2d:69:b1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.489533 4805 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.489652 4805 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.490024 4805 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.490213 4805 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.490247 4805 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.490485 4805 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.490496 4805 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.490945 4805 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.490977 4805 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.491111 4805 state_mem.go:36] "Initialized new in-memory state store" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.491197 4805 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.494457 4805 kubelet.go:418] "Attempting to sync node with API server" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.494525 4805 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.494554 4805 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.494599 4805 kubelet.go:324] "Adding apiserver pod source" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.494614 4805 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.499188 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.499249 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.506064 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.506108 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.507324 4805 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.508325 4805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.510862 4805 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516730 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516764 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516775 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516784 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516795 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516803 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516811 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516828 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516836 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516844 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516856 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.516863 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.518557 4805 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.528292 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.528668 4805 server.go:1280] "Started kubelet" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.529812 4805 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.529798 4805 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.530968 4805 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 20:56:23 crc systemd[1]: Started Kubernetes Kubelet. Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.533579 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.533630 4805 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.533781 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:48:20.095444982 +0000 UTC Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.533890 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.540274 4805 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.540308 4805 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 
20:56:23.540532 4805 server.go:460] "Adding debug handlers to kubelet server" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.540861 4805 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.541146 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.541253 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.541391 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="200ms" Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.546128 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894d593a29d962e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 20:56:23.528592942 +0000 UTC m=+1.347276237,LastTimestamp:2026-02-16 20:56:23.528592942 +0000 UTC 
m=+1.347276237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.550672 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.550874 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.550978 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.551153 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.558915 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.558963 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.558979 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.558996 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559018 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559032 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559046 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559061 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559076 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559096 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559112 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559129 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559145 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559158 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559171 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559185 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559200 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559222 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559235 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559250 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" 
seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559263 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559276 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559293 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559308 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559323 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.558826 4805 factory.go:55] Registering systemd factory Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559398 4805 factory.go:221] Registration of the systemd container factory successfully Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559340 4805 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559475 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559502 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559528 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559545 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559851 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559872 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559887 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559902 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559916 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559932 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559946 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559961 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559974 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.559990 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560006 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560018 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560028 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560041 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560053 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560067 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560079 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560091 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560127 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560139 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" 
seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560152 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560163 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560177 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560189 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560203 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560216 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560227 4805 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560239 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560250 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560263 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560275 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560287 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560301 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560314 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560325 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560338 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560349 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560361 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560381 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560392 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560404 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560416 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560428 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560440 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560453 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560466 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560479 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560500 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560514 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560526 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560540 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560552 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560563 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560579 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560593 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560607 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560620 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560633 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560645 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560657 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560670 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560683 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560695 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560706 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560733 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560746 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560758 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560769 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560781 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560793 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560811 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560827 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560842 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560856 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560869 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560883 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560895 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560906 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560918 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560931 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560945 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.560957 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561007 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561043 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561054 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561067 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561081 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561093 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561105 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561119 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561132 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561144 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561156 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561170 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561184 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561196 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561209 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561222 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561234 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561247 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561258 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561271 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561284 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561297 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561309 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561322 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561335 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561345 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561359 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.561372 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563669 4805 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563697 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563711 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563745 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563761 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563772 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563784 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563797 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563809 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563821 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563831 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563842 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563854 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563867 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563878 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563898 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563909 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563920 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563930 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563941 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563951 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563962 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563976 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563989 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.563999 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564010 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564022 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564033 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564045 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564055 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564067 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564078 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564090 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564102 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564113 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564125 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564141 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564154 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564167 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564182 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564196 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564210 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564225 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564239 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564252 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564263 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564275 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564280 4805 factory.go:153] Registering CRI-O factory Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564289 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564303 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564317 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564345 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564358 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564369 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564382 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564396 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564306 4805 factory.go:221] Registration of the crio container factory successfully Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564409 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564422 4805 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564433 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564447 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564458 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564469 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564480 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564493 4805 factory.go:219] Registration of the containerd container factory 
failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564528 4805 factory.go:103] Registering Raw factory Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564547 4805 manager.go:1196] Started watching for new ooms in manager Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564494 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564665 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564686 4805 reconstruct.go:97] "Volume reconstruction finished" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.564695 4805 reconciler.go:26] "Reconciler: start to sync state" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.565319 4805 manager.go:319] Starting recovery of all containers Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.588549 4805 manager.go:324] Recovery completed Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.594343 4805 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.596358 4805 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.596479 4805 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.596543 4805 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.596758 4805 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 20:56:23 crc kubenswrapper[4805]: W0216 20:56:23.597309 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.597374 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.602181 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.604418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.604455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.604465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.605670 4805 cpu_manager.go:225] 
"Starting CPU manager" policy="none" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.605688 4805 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.605711 4805 state_mem.go:36] "Initialized new in-memory state store" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.620894 4805 policy_none.go:49] "None policy: Start" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.621991 4805 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.622023 4805 state_mem.go:35] "Initializing new in-memory state store" Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.634644 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.690314 4805 manager.go:334] "Starting Device Plugin manager" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.690386 4805 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.690402 4805 server.go:79] "Starting device plugin registration server" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.690930 4805 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.690950 4805 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.691175 4805 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.691266 4805 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.691278 4805 plugin_manager.go:118] "Starting Kubelet Plugin Manager" 
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.696926 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.697062 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.698160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.698216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.698232 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.698485 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.698686 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.698744 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.699371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.699399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.699410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.699508 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.699657 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.699698 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.699861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.699888 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.699898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.700869 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.700893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.700903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.700876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.700936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.700969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.701045 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not 
found" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.701087 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.701209 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.701244 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.701704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.701758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.701778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.701923 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.702016 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.702055 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.704115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.704161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.704172 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.704458 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.704493 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.704830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.704858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.704868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.704978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.704998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc 
kubenswrapper[4805]: I0216 20:56:23.705015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.708341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.708394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.708405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.742499 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="400ms" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767023 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767106 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767136 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767187 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767264 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767338 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767369 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767424 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767445 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767521 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767581 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.767696 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.792146 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.794006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.794055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.794069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.794290 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: E0216 20:56:23.795207 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.868982 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869065 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869103 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869136 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869168 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869204 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869258 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869273 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869232 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869286 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869345 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869437 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869466 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869490 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869519 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869555 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869529 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869568 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869589 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869602 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869589 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869591 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869584 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869652 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869673 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869702 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869742 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.869795 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 20:56:23 crc kubenswrapper[4805]: I0216 20:56:23.998871 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.000781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.000858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.000867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.000895 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 20:56:24 crc kubenswrapper[4805]: E0216 20:56:24.001483 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.046215 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.074587 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.095651 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 20:56:24 crc kubenswrapper[4805]: W0216 20:56:24.102569 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-261970cf3ee31772d10fb442483131cef442442dd4dc167adb61c008595a9d0a WatchSource:0}: Error finding container 261970cf3ee31772d10fb442483131cef442442dd4dc167adb61c008595a9d0a: Status 404 returned error can't find the container with id 261970cf3ee31772d10fb442483131cef442442dd4dc167adb61c008595a9d0a
Feb 16 20:56:24 crc kubenswrapper[4805]: W0216 20:56:24.116508 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-d0fbb0e1aeb8d5e8419956009e3c0e0c6f8cc30c9a6a2b6b095b1a19db4009bb WatchSource:0}: Error finding container d0fbb0e1aeb8d5e8419956009e3c0e0c6f8cc30c9a6a2b6b095b1a19db4009bb: Status 404 returned error can't find the container with id d0fbb0e1aeb8d5e8419956009e3c0e0c6f8cc30c9a6a2b6b095b1a19db4009bb
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.119856 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 20:56:24 crc kubenswrapper[4805]: W0216 20:56:24.125175 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-952494c0c8aadd7063bce6977891bc14d66dcea751d2e71aa3d8454338415bbd WatchSource:0}: Error finding container 952494c0c8aadd7063bce6977891bc14d66dcea751d2e71aa3d8454338415bbd: Status 404 returned error can't find the container with id 952494c0c8aadd7063bce6977891bc14d66dcea751d2e71aa3d8454338415bbd
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.127954 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 16 20:56:24 crc kubenswrapper[4805]: E0216 20:56:24.144136 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="800ms"
Feb 16 20:56:24 crc kubenswrapper[4805]: W0216 20:56:24.146331 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-11ee301587dedae80352848390d3f51cadf8e713062c5cb20ed1e54e285cf5e6 WatchSource:0}: Error finding container 11ee301587dedae80352848390d3f51cadf8e713062c5cb20ed1e54e285cf5e6: Status 404 returned error can't find the container with id 11ee301587dedae80352848390d3f51cadf8e713062c5cb20ed1e54e285cf5e6
Feb 16 20:56:24 crc kubenswrapper[4805]: W0216 20:56:24.158868 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-001f1c86c952da469a4221c03d3d43580d7e430c7439ec9d5f7efbe3e68b4e7e WatchSource:0}: Error finding container 001f1c86c952da469a4221c03d3d43580d7e430c7439ec9d5f7efbe3e68b4e7e: Status 404 returned error can't find the container with id 001f1c86c952da469a4221c03d3d43580d7e430c7439ec9d5f7efbe3e68b4e7e
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.402006 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.403413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.403441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.403451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.403477 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 20:56:24 crc kubenswrapper[4805]: E0216 20:56:24.404154 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc"
Feb 16 20:56:24 crc kubenswrapper[4805]: W0216 20:56:24.496697 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 16 20:56:24 crc kubenswrapper[4805]: E0216 20:56:24.496825 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.530316 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.534352 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:35:02.458527268 +0000 UTC
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.602967 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"952494c0c8aadd7063bce6977891bc14d66dcea751d2e71aa3d8454338415bbd"}
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.604074 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d0fbb0e1aeb8d5e8419956009e3c0e0c6f8cc30c9a6a2b6b095b1a19db4009bb"}
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.605363 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"261970cf3ee31772d10fb442483131cef442442dd4dc167adb61c008595a9d0a"}
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.606712 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"001f1c86c952da469a4221c03d3d43580d7e430c7439ec9d5f7efbe3e68b4e7e"}
Feb 16 20:56:24 crc kubenswrapper[4805]: I0216 20:56:24.608057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"11ee301587dedae80352848390d3f51cadf8e713062c5cb20ed1e54e285cf5e6"}
Feb 16 20:56:24 crc kubenswrapper[4805]: W0216 20:56:24.724248 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 16 20:56:24 crc kubenswrapper[4805]: E0216 20:56:24.724364 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Feb 16 20:56:24 crc kubenswrapper[4805]: W0216 20:56:24.759977 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 16 20:56:24 crc kubenswrapper[4805]: E0216 20:56:24.760087 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Feb 16 20:56:24 crc kubenswrapper[4805]: W0216 20:56:24.893252 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 16 20:56:24 crc kubenswrapper[4805]: E0216 20:56:24.893369 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Feb 16 20:56:24 crc kubenswrapper[4805]: E0216 20:56:24.945542 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="1.6s"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.205048 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.206427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.206480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.206496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.206536 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 20:56:25 crc kubenswrapper[4805]: E0216 20:56:25.207185 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.425372 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 20:56:25 crc kubenswrapper[4805]: E0216 20:56:25.427558 4805 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.530303 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.535302 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:56:55.163417259 +0000 UTC
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.618284 4805 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd" exitCode=0
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.618402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd"}
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.618503 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.620685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.620745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.620755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.620768 4805 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5" exitCode=0
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.620865 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5"}
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.620885 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.622144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.622309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.622336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.622571 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05"}
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.624954 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef" exitCode=0
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.625013 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef"}
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.625107 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.626343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.626383 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.626408 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.627256 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="747855793b94d31ad6df993dfdc8d090d1d78a4e098f19815efe74dfb5ee2fc5" exitCode=0
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.627284 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"747855793b94d31ad6df993dfdc8d090d1d78a4e098f19815efe74dfb5ee2fc5"}
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.627412 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.628964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.629003 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.629017 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.629047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.629938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.629970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:25 crc kubenswrapper[4805]: I0216 20:56:25.630077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:26 crc kubenswrapper[4805]: W0216 20:56:26.186502 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 16 20:56:26 crc kubenswrapper[4805]: E0216 20:56:26.186577 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Feb 16 20:56:26 crc kubenswrapper[4805]: E0216 20:56:26.458272 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894d593a29d962e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 20:56:23.528592942 +0000 UTC m=+1.347276237,LastTimestamp:2026-02-16 20:56:23.528592942 +0000 UTC m=+1.347276237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.529695 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.536055 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:38:13.657480617 +0000 UTC
Feb 16 20:56:26 crc kubenswrapper[4805]: E0216 20:56:26.546417 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="3.2s"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.632194 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b29cf8a48a71fc22b69ebac39729e3c88f18302830bdeea48e3a205445f4e3fa" exitCode=0
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.632467 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b29cf8a48a71fc22b69ebac39729e3c88f18302830bdeea48e3a205445f4e3fa"}
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.632520 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.633770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.633822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.633835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.635693 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.635700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a046299ba36811947fdf82ca53f38854b76e08fd42cd0c4687988c59a2a286f0"}
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.636697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.636759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.636774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.639423 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc"}
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.639486 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782"}
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.639504 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5"}
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.639449 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.640535 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.640579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.640599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.642162 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5"}
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.642191 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9"}
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.642205 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da"}
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.642219 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.643223 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.643255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.643265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.647025 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c0c04aedaf994ef8b11bc9b28d10106277fa38ee801f4df07a8d812ca6f0320c"}
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.647050 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9"}
Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.647070 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374"} Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.647082 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e"} Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.647092 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172"} Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.647143 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.648314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.648354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.648364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.807734 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.809533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.809583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.809594 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:26 crc kubenswrapper[4805]: I0216 20:56:26.809624 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 20:56:26 crc kubenswrapper[4805]: E0216 20:56:26.810330 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Feb 16 20:56:26 crc kubenswrapper[4805]: W0216 20:56:26.876145 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 16 20:56:26 crc kubenswrapper[4805]: E0216 20:56:26.876249 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:27 crc kubenswrapper[4805]: W0216 20:56:27.516149 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 16 20:56:27 crc kubenswrapper[4805]: E0216 20:56:27.516321 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 
38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.530044 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.537180 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 20:41:51.838665221 +0000 UTC Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.653014 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="90cd1fbfe7718c18e687807b67a33a53a010cdc05ed1829e40a782ed87806efa" exitCode=0 Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.653117 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"90cd1fbfe7718c18e687807b67a33a53a010cdc05ed1829e40a782ed87806efa"} Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.653225 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.653294 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.653324 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.653358 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.653366 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:27 crc 
kubenswrapper[4805]: I0216 20:56:27.653305 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.653547 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.655127 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.655183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.655383 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.655960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.656064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.656081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.656102 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.656120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.656235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.656263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.656280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.656296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.656300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.656323 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.656418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:27 crc kubenswrapper[4805]: I0216 20:56:27.667073 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.039550 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.481215 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.488627 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.537441 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:26:00.631969508 +0000 UTC Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.664331 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ba49b81323a328998f3c70d62f258b89d25d67d68b76c1d59c9c9c7e8784d2c1"} Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.664420 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.664419 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.664572 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.664606 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.664425 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e62611b5fa8b5ee1cc1f520f6b97c2c9c32d60cc44847bf58bab1eb3f447cf1e"} Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.664761 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"44b333e5f60bea84b564884d8afbf71989e3b6b5b64bc491fad6212f461adf98"} Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.665942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.665966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.665975 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:28 crc 
kubenswrapper[4805]: I0216 20:56:28.666090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.666114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.666126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.666604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.666637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:28 crc kubenswrapper[4805]: I0216 20:56:28.666659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.538352 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:33:58.77483073 +0000 UTC Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.595426 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.606817 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.671842 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.671844 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0bac8d9376efe35770ba745baf3a82efba3507cef19d77159dcf070473930c7d"} Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.671927 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.671950 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0de7eea820cfecab4f2ea196e85886b972d10b5163391dafc634968f2909ca42"} Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.671984 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.673026 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.673091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.673119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.674162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.674224 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.674243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:29 crc kubenswrapper[4805]: I0216 20:56:29.743416 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 20:56:30 crc 
kubenswrapper[4805]: I0216 20:56:30.010769 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.012390 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.012419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.012431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.012455 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.263961 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.264135 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.264177 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.265430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.265491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.265512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.446386 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 
20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.516033 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.539183 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:23:21.403402896 +0000 UTC Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.673810 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.673823 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.673966 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.675821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.675850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.675861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.675897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.675950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.675961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:30 crc 
kubenswrapper[4805]: I0216 20:56:30.675842 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.676100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:30 crc kubenswrapper[4805]: I0216 20:56:30.676111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:31 crc kubenswrapper[4805]: I0216 20:56:31.540286 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:06:14.565570355 +0000 UTC Feb 16 20:56:31 crc kubenswrapper[4805]: I0216 20:56:31.677008 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:31 crc kubenswrapper[4805]: I0216 20:56:31.678068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:31 crc kubenswrapper[4805]: I0216 20:56:31.678119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:31 crc kubenswrapper[4805]: I0216 20:56:31.678129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:32 crc kubenswrapper[4805]: I0216 20:56:32.540698 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:42:17.438644282 +0000 UTC Feb 16 20:56:32 crc kubenswrapper[4805]: I0216 20:56:32.596847 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 20:56:32 crc kubenswrapper[4805]: I0216 20:56:32.597018 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 20:56:33 crc kubenswrapper[4805]: I0216 20:56:33.146331 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 20:56:33 crc kubenswrapper[4805]: I0216 20:56:33.146599 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:33 crc kubenswrapper[4805]: I0216 20:56:33.148338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:33 crc kubenswrapper[4805]: I0216 20:56:33.148382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:33 crc kubenswrapper[4805]: I0216 20:56:33.148398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:33 crc kubenswrapper[4805]: I0216 20:56:33.541323 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:53:31.751952249 +0000 UTC Feb 16 20:56:33 crc kubenswrapper[4805]: E0216 20:56:33.701159 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 20:56:34 crc kubenswrapper[4805]: I0216 20:56:34.541641 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 
13:31:06.460462046 +0000 UTC Feb 16 20:56:35 crc kubenswrapper[4805]: I0216 20:56:35.541752 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:16:35.146424139 +0000 UTC Feb 16 20:56:36 crc kubenswrapper[4805]: I0216 20:56:36.542596 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 19:00:11.520490954 +0000 UTC Feb 16 20:56:37 crc kubenswrapper[4805]: I0216 20:56:37.544229 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 10:51:23.042673168 +0000 UTC Feb 16 20:56:37 crc kubenswrapper[4805]: I0216 20:56:37.615736 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51372->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 16 20:56:37 crc kubenswrapper[4805]: I0216 20:56:37.616210 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51372->192.168.126.11:17697: read: connection reset by peer" Feb 16 20:56:37 crc kubenswrapper[4805]: W0216 20:56:37.717221 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 20:56:37 crc kubenswrapper[4805]: I0216 20:56:37.717620 4805 trace.go:236] Trace[1132859914]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 20:56:27.715) (total time: 10002ms): Feb 16 20:56:37 crc kubenswrapper[4805]: Trace[1132859914]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:56:37.717) Feb 16 20:56:37 crc kubenswrapper[4805]: Trace[1132859914]: [10.002014219s] [10.002014219s] END Feb 16 20:56:37 crc kubenswrapper[4805]: E0216 20:56:37.717643 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.401384 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.401615 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.403250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.403302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.403315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.419680 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.419787 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.426368 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.426466 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.472592 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.545859 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:53:42.459590538 +0000 UTC Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.698762 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 
20:56:38.700968 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c0c04aedaf994ef8b11bc9b28d10106277fa38ee801f4df07a8d812ca6f0320c" exitCode=255 Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.701065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c0c04aedaf994ef8b11bc9b28d10106277fa38ee801f4df07a8d812ca6f0320c"} Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.701174 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.701289 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.702328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.702365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.702386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.702398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.702386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.702440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.702925 4805 scope.go:117] "RemoveContainer" 
containerID="c0c04aedaf994ef8b11bc9b28d10106277fa38ee801f4df07a8d812ca6f0320c" Feb 16 20:56:38 crc kubenswrapper[4805]: I0216 20:56:38.715043 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 20:56:39 crc kubenswrapper[4805]: I0216 20:56:39.546919 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:56:21.758080408 +0000 UTC Feb 16 20:56:39 crc kubenswrapper[4805]: I0216 20:56:39.706891 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 20:56:39 crc kubenswrapper[4805]: I0216 20:56:39.708983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a"} Feb 16 20:56:39 crc kubenswrapper[4805]: I0216 20:56:39.709065 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:39 crc kubenswrapper[4805]: I0216 20:56:39.709247 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:39 crc kubenswrapper[4805]: I0216 20:56:39.710066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:39 crc kubenswrapper[4805]: I0216 20:56:39.710100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:39 crc kubenswrapper[4805]: I0216 20:56:39.710115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:39 crc kubenswrapper[4805]: I0216 20:56:39.710333 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:39 crc kubenswrapper[4805]: I0216 20:56:39.710394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:39 crc kubenswrapper[4805]: I0216 20:56:39.710415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.273656 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.446855 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.523903 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.524185 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.525746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.525800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.525817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.547235 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:21:36.626423641 +0000 UTC Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.712337 4805 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.714301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.714368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.714381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:40 crc kubenswrapper[4805]: I0216 20:56:40.720338 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:41 crc kubenswrapper[4805]: I0216 20:56:41.548119 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:24:14.488114214 +0000 UTC Feb 16 20:56:41 crc kubenswrapper[4805]: I0216 20:56:41.714692 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:41 crc kubenswrapper[4805]: I0216 20:56:41.723752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:41 crc kubenswrapper[4805]: I0216 20:56:41.723797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:41 crc kubenswrapper[4805]: I0216 20:56:41.723841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:42 crc kubenswrapper[4805]: I0216 20:56:42.548906 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:39:42.821513537 +0000 UTC Feb 16 20:56:42 crc 
kubenswrapper[4805]: I0216 20:56:42.596281 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 20:56:42 crc kubenswrapper[4805]: I0216 20:56:42.596396 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 20:56:42 crc kubenswrapper[4805]: I0216 20:56:42.718234 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:42 crc kubenswrapper[4805]: I0216 20:56:42.719092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:42 crc kubenswrapper[4805]: I0216 20:56:42.719133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:42 crc kubenswrapper[4805]: I0216 20:56:42.719146 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.410774 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.415926 4805 trace.go:236] Trace[1662957707]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 20:56:30.719) (total time: 12696ms): Feb 16 20:56:43 crc kubenswrapper[4805]: Trace[1662957707]: ---"Objects listed" error: 12696ms (20:56:43.415) Feb 16 20:56:43 crc kubenswrapper[4805]: Trace[1662957707]: [12.696619003s] [12.696619003s] END Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.415984 4805 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.418824 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.423779 4805 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.423892 4805 trace.go:236] Trace[1061584746]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 20:56:32.721) (total time: 10701ms): Feb 16 20:56:43 crc kubenswrapper[4805]: Trace[1061584746]: ---"Objects listed" error: 10701ms (20:56:43.423) Feb 16 20:56:43 crc kubenswrapper[4805]: Trace[1061584746]: [10.701937656s] [10.701937656s] END Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.423932 4805 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.424542 4805 trace.go:236] Trace[1492489956]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 20:56:30.760) (total time: 12664ms): Feb 16 20:56:43 crc kubenswrapper[4805]: Trace[1492489956]: ---"Objects listed" error: 12664ms (20:56:43.424) Feb 16 20:56:43 crc kubenswrapper[4805]: Trace[1492489956]: [12.664254279s] [12.664254279s] END Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.424603 4805 reflector.go:368] Caches populated for 
*v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.426709 4805 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.451447 4805 csr.go:261] certificate signing request csr-kt2lt is approved, waiting to be issued Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.475435 4805 csr.go:257] certificate signing request csr-kt2lt is issued Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.504229 4805 apiserver.go:52] "Watching apiserver" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.549393 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:44:14.419847458 +0000 UTC Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.650515 4805 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.651199 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.651865 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.651938 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.652191 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.652514 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.652556 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.652612 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.652640 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.652684 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.653530 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.662054 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.663046 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.663574 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.664074 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.665181 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.665547 4805 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.665766 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.669567 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.671565 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.742045 4805 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.760700 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.775493 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.792211 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.813414 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.826921 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.826976 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827001 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827039 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827078 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827102 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827131 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827153 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827179 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827225 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827270 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827295 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827316 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827338 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827360 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827383 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827519 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827537 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827813 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827810 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827965 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827986 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.828136 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.828265 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.827604 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.828380 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.828374 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.828433 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.828637 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.828683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.828752 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.828933 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829086 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.828827 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829228 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829260 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829283 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829307 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829356 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829383 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829407 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829435 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829461 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829551 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829590 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829619 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829645 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829671 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829673 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829699 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829757 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829803 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829833 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829856 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829880 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829888 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829910 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829916 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.829935 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830008 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830036 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830058 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830081 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830100 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830134 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830150 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830171 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830190 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830193 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830211 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830225 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830231 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830291 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830328 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830350 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830368 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830387 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830406 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830428 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830474 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830494 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830512 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830532 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830549 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830567 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830587 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830607 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830624 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830630 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830647 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830668 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830689 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830709 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830751 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830774 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" 
(UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830792 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830811 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830831 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830848 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830870 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830891 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830908 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830910 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830926 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831030 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831091 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.830988 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831349 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831377 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831398 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831428 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831522 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831553 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831581 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831608 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831632 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831662 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831707 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831752 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831780 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 
20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831793 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831805 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831830 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831855 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831880 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831907 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831927 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831949 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831972 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831989 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.831998 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832039 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832071 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832103 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832134 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832185 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832209 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832236 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832263 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832286 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 
20:56:43.832306 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832327 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832348 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832362 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832370 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832452 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832483 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832513 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832537 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832562 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832582 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832601 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832675 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832702 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832713 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832784 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832861 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832893 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832919 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832942 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832963 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.832987 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833011 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833034 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833055 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833080 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833101 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833129 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833133 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833154 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833179 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833205 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833232 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833253 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833256 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833310 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833335 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833362 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833383 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833404 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833423 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833442 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833628 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833648 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833673 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 20:56:43 crc 
kubenswrapper[4805]: I0216 20:56:43.833698 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833625 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833736 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833757 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833780 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833798 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833816 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833835 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833855 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833872 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833900 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833921 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833938 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 
16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833956 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833974 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833995 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834015 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834034 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834054 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834071 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834093 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834115 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834138 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834154 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 20:56:43 crc kubenswrapper[4805]: 
I0216 20:56:43.834173 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834237 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834255 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834296 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834313 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834330 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834347 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834364 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834390 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834408 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 
20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834426 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834445 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834488 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834507 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834568 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834622 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834641 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834678 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834702 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834721 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834758 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834777 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834798 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834859 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834884 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834906 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835007 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835020 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835032 4805 
reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835044 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835056 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835065 4805 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835075 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835448 4805 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835461 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835475 4805 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835490 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835504 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835518 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835530 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835544 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835556 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835568 4805 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835579 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835588 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835597 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835608 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835617 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835628 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835638 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835647 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835656 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835666 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835676 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835686 4805 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835695 4805 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.858069 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.859760 4805 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.882563 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.883536 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833597 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833572 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.833654 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834005 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834078 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.834352 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835152 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835559 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.835944 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.836630 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.836573 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.836995 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.837441 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.837424 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.839380 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.839704 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.840053 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.837709 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.840796 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.841022 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.847553 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.849027 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:56:44.348982327 +0000 UTC m=+22.167665632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.888831 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.888965 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.889096 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.889345 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.889399 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.889703 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.889805 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.890027 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.890135 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.890341 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.890362 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.891220 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.891240 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.849646 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.849647 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.849702 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.850625 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.850646 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.851138 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.851251 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.851436 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.891589 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.891701 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:44.391665301 +0000 UTC m=+22.210348606 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.891924 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.891998 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.892059 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.892289 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.892465 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.892753 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.849059 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.851933 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.852040 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.852256 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.859107 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.859221 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.863875 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.891484 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.893148 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.893192 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.894037 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.894370 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.894744 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.898414 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.898836 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.851607 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.898980 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.899019 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.899181 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.899269 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:44.399242675 +0000 UTC m=+22.217925970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.902758 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.905092 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.905133 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.905718 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.907548 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.908048 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.909706 4805 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.910205 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.925589 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.926286 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.926311 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.926553 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.926625 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.926706 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.926774 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.926809 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.926925 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:44.426892422 +0000 UTC m=+22.245575717 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:43 crc kubenswrapper[4805]: E0216 20:56:43.927013 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:44.427003235 +0000 UTC m=+22.245686520 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.930847 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.938259 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.938817 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939040 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939129 4805 reconciler_common.go:293] 
"Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939142 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939153 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939163 4805 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939254 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939266 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939276 4805 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939286 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939297 4805 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939306 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939316 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939324 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939336 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939345 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939355 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" 
DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939364 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939374 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939383 4805 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939395 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939404 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939414 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939624 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod 
"01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939630 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.938454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.939762 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941244 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941303 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941318 4805 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941328 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941339 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941350 4805 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941361 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941370 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941382 
4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941393 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941403 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941413 4805 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941422 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941431 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941462 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941473 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941484 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941494 4805 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941505 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941516 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941535 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941546 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941555 4805 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941567 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941576 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941585 4805 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941594 4805 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941604 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941615 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941627 4805 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: 
I0216 20:56:43.941636 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941646 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941656 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941665 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941676 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941687 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941698 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941710 4805 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941742 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941756 4805 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941766 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941778 4805 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941791 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941805 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941818 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941832 4805 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941844 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941855 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941866 4805 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941875 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.941988 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.949435 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.949538 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.949517 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.949740 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.952079 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.952121 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.952155 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.952229 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.957221 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.958633 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.965109 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.965368 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.965699 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:43 crc kubenswrapper[4805]: I0216 20:56:43.990403 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.043765 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.044007 4805 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.044172 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.044262 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.044348 4805 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.044416 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.044680 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.044904 4805 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.045026 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.045163 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.045244 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.045324 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.045402 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.045610 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.045696 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.045808 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.063398 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.063403 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.063617 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.063896 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.064274 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.077500 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.084900 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.085016 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.085139 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.085384 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.092331 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.098039 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.104810 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.116643 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.143516 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.144611 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.144879 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.144995 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.145027 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.146416 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148024 4805 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148053 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148075 4805 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148092 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148108 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148124 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148140 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148155 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148172 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148187 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148203 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148218 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148234 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148250 4805 
reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.148268 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.207662 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.207755 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.207914 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.208060 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.208474 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.208642 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.209019 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.209106 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.209137 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.209329 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.209495 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.209649 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.210204 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.210484 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.210546 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.211900 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.211968 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.246517 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.246528 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.246684 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.246942 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.247134 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.247130 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.247270 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.247357 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.247408 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.247431 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.247711 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.249155 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.249405 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.249420 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.249502 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.249572 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.249679 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.249748 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: W0216 20:56:44.249783 4805 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~projected/kube-api-access-d4lsv Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.249804 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.249805 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: W0216 20:56:44.249819 4805 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes/kubernetes.io~configmap/env-overrides Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250183 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250200 4805 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250213 4805 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250223 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250233 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250243 4805 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250254 
4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250263 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250273 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250283 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250295 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250305 4805 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250316 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250326 4805 reconciler_common.go:293] 
"Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250337 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250348 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250358 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250367 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250181 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250376 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250433 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250443 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250480 4805 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250490 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250502 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250512 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250522 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250532 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250540 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250549 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250560 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250570 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250581 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.250702 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.251311 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.251863 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: W0216 20:56:44.251963 4805 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes/kubernetes.io~secret/profile-collector-cert Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.251996 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.351804 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.352119 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.352093938 +0000 UTC m=+23.170777233 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.352171 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.352183 4805 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.352193 4805 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.352204 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.352212 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.352220 4805 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.377919 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.378307 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.378307 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.378506 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.378587 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.379051 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.379111 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.379146 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.379901 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.380754 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.380783 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.380914 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.381154 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.381328 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.381381 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.381452 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.381501 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.385125 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.387855 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.388952 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.405579 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.406028 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.450887 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.452989 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453058 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453087 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453118 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453158 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453176 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453190 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453203 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453217 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453231 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453245 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453259 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" 
DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453272 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453318 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453361 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453386 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453401 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453468 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453476 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453518 4805 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453534 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453555 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.453530727 +0000 UTC m=+23.272214042 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453569 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453646 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.453624619 +0000 UTC m=+23.272307914 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453674 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453710 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453742 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453770 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.453761613 +0000 UTC m=+23.272444898 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453743 4805 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.453812 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:45.453788664 +0000 UTC m=+23.272472169 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453851 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453869 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453884 4805 
reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453894 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453904 4805 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453913 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453924 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.453934 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.455529 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.460189 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.469820 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.476753 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 20:51:43 +0000 UTC, rotation deadline is 2026-11-10 04:14:04.333126904 +0000 UTC Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.476822 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6391h17m19.856307858s for next certificate rotation Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.550509 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:12:15.266797423 +0000 UTC Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.554463 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.554496 4805 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.554508 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.580447 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.730674 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.731070 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.732303 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a" exitCode=255 Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.732373 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a"} Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.732454 4805 scope.go:117] "RemoveContainer" containerID="c0c04aedaf994ef8b11bc9b28d10106277fa38ee801f4df07a8d812ca6f0320c" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.750354 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b1eb72a5d6da8fdde1ca74dffd6a7fd80b64af447fc3d8b1ad8cccebe0c617dd"} Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.753487 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1b2a87f0c08c5c191f89874b714e83906d48bd266f25c81a839c823249af5115"} Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.754922 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c3f7b08b13c6bfcdf2a1e9eac5c5e8987c734195253b6cda5d67d0590da7cb6f"} Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.759911 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.772517 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.783793 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.802528 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.831017 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-48h2w"] Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.831577 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-48h2w" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.835096 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.844238 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.844254 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.844549 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.855946 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/368e42ff-95cf-460e-84c6-ae9aeb3f8657-hosts-file\") pod \"node-resolver-48h2w\" (UID: \"368e42ff-95cf-460e-84c6-ae9aeb3f8657\") " pod="openshift-dns/node-resolver-48h2w" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.856016 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbwm9\" (UniqueName: \"kubernetes.io/projected/368e42ff-95cf-460e-84c6-ae9aeb3f8657-kube-api-access-lbwm9\") pod \"node-resolver-48h2w\" (UID: \"368e42ff-95cf-460e-84c6-ae9aeb3f8657\") " pod="openshift-dns/node-resolver-48h2w" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.860850 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.884938 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.896451 4805 scope.go:117] "RemoveContainer" containerID="3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a" Feb 16 20:56:44 crc kubenswrapper[4805]: E0216 20:56:44.896741 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.905043 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.905191 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.945561 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.956198 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.956745 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/368e42ff-95cf-460e-84c6-ae9aeb3f8657-hosts-file\") pod \"node-resolver-48h2w\" (UID: \"368e42ff-95cf-460e-84c6-ae9aeb3f8657\") " pod="openshift-dns/node-resolver-48h2w" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.956811 4805 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-lbwm9\" (UniqueName: \"kubernetes.io/projected/368e42ff-95cf-460e-84c6-ae9aeb3f8657-kube-api-access-lbwm9\") pod \"node-resolver-48h2w\" (UID: \"368e42ff-95cf-460e-84c6-ae9aeb3f8657\") " pod="openshift-dns/node-resolver-48h2w" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.956979 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/368e42ff-95cf-460e-84c6-ae9aeb3f8657-hosts-file\") pod \"node-resolver-48h2w\" (UID: \"368e42ff-95cf-460e-84c6-ae9aeb3f8657\") " pod="openshift-dns/node-resolver-48h2w" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.975854 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.983687 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbwm9\" (UniqueName: \"kubernetes.io/projected/368e42ff-95cf-460e-84c6-ae9aeb3f8657-kube-api-access-lbwm9\") pod \"node-resolver-48h2w\" (UID: \"368e42ff-95cf-460e-84c6-ae9aeb3f8657\") " pod="openshift-dns/node-resolver-48h2w" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.987040 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:44 crc kubenswrapper[4805]: I0216 20:56:44.995306 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.144795 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-48h2w" Feb 16 20:56:45 crc kubenswrapper[4805]: W0216 20:56:45.160407 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368e42ff_95cf_460e_84c6_ae9aeb3f8657.slice/crio-0934635c17d7bb805a46c9b0159949f5c809dc8fb3b7c86a9afbc54cec5729e3 WatchSource:0}: Error finding container 0934635c17d7bb805a46c9b0159949f5c809dc8fb3b7c86a9afbc54cec5729e3: Status 404 returned error can't find the container with id 0934635c17d7bb805a46c9b0159949f5c809dc8fb3b7c86a9afbc54cec5729e3 Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.232329 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gq8qd"] Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.233069 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-8qwfz"] Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.233421 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.233528 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.233419 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-wmh7d"] Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.234394 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.239256 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.239338 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.239444 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.239736 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.239816 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.239870 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.239826 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.240007 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.240144 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.240049 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.240100 4805 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.240099 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.259545 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.260025 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-os-release\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.260181 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-daemon-config\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.260339 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-cnibin\") 
pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.260457 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-run-k8s-cni-cncf-io\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.260573 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-system-cni-dir\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.260701 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-run-netns\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.261454 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-run-multus-certs\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.261626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-var-lib-kubelet\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.261799 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00c308fa-9d36-4fec-8717-6dbbe57523c6-mcd-auth-proxy-config\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.261943 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5f4z\" (UniqueName: \"kubernetes.io/projected/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-kube-api-access-m5f4z\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.262108 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.262253 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-hostroot\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.262391 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-conf-dir\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.262541 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff5kh\" (UniqueName: \"kubernetes.io/projected/00c308fa-9d36-4fec-8717-6dbbe57523c6-kube-api-access-ff5kh\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.262701 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-cni-dir\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.263257 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-socket-dir-parent\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.263487 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00c308fa-9d36-4fec-8717-6dbbe57523c6-proxy-tls\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 
20:56:45.265848 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-system-cni-dir\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.265965 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-cni-binary-copy\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.266066 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-cni-binary-copy\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.266181 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-cnibin\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.266313 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 
20:56:45.266479 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-etc-kubernetes\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.266642 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhj6l\" (UniqueName: \"kubernetes.io/projected/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-kube-api-access-dhj6l\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.266834 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-var-lib-cni-bin\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.267480 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-var-lib-cni-multus\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.267631 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00c308fa-9d36-4fec-8717-6dbbe57523c6-rootfs\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 
20:56:45.267847 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-os-release\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.281460 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.293074 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.317563 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c04aedaf994ef8b11bc9b28d10106277fa38ee801f4df07a8d812ca6f0320c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:37Z\\\",\\\"message\\\":\\\"W0216 20:56:26.896088 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:26.896489 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275386 cert, and key in /tmp/serving-cert-2166042406/serving-signer.crt, 
/tmp/serving-cert-2166042406/serving-signer.key\\\\nI0216 20:56:27.163594 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:27.164036 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:27.164122 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:27.166406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2166042406/tls.crt::/tmp/serving-cert-2166042406/tls.key\\\\\\\"\\\\nF0216 20:56:37.609825 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.335417 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.359150 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.368935 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369047 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhj6l\" (UniqueName: \"kubernetes.io/projected/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-kube-api-access-dhj6l\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369076 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-var-lib-cni-bin\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369095 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-var-lib-cni-multus\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369115 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00c308fa-9d36-4fec-8717-6dbbe57523c6-rootfs\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369148 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-os-release\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369175 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-os-release\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369193 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-daemon-config\") 
pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369212 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-cnibin\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369231 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-run-k8s-cni-cncf-io\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369246 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-system-cni-dir\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369262 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-run-netns\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369277 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-run-multus-certs\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") 
" pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369300 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-var-lib-kubelet\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369317 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00c308fa-9d36-4fec-8717-6dbbe57523c6-mcd-auth-proxy-config\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369334 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5f4z\" (UniqueName: \"kubernetes.io/projected/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-kube-api-access-m5f4z\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369354 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369369 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-hostroot\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") 
" pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369384 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-conf-dir\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369404 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff5kh\" (UniqueName: \"kubernetes.io/projected/00c308fa-9d36-4fec-8717-6dbbe57523c6-kube-api-access-ff5kh\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369419 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-cni-dir\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369435 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-socket-dir-parent\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00c308fa-9d36-4fec-8717-6dbbe57523c6-proxy-tls\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc 
kubenswrapper[4805]: I0216 20:56:45.369481 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-system-cni-dir\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-cni-binary-copy\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369517 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-cni-binary-copy\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369532 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-cnibin\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369549 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369566 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-etc-kubernetes\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369635 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-etc-kubernetes\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.369750 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:56:47.369707883 +0000 UTC m=+25.188391178 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.369982 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-system-cni-dir\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370074 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-cnibin\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370108 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-run-k8s-cni-cncf-io\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370135 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-hostroot\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: 
I0216 20:56:45.370217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-conf-dir\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370534 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-system-cni-dir\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370774 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00c308fa-9d36-4fec-8717-6dbbe57523c6-mcd-auth-proxy-config\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370844 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-cni-dir\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370858 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-var-lib-kubelet\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370848 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-run-multus-certs\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370928 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00c308fa-9d36-4fec-8717-6dbbe57523c6-rootfs\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370902 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-cnibin\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370899 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-run-netns\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370962 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-var-lib-cni-bin\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.370982 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-host-var-lib-cni-multus\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " 
pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.371327 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.371460 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-os-release\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.371537 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-os-release\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.371587 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-cni-binary-copy\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.371650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-socket-dir-parent\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc 
kubenswrapper[4805]: I0216 20:56:45.371751 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-cni-binary-copy\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.371859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.371897 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-multus-daemon-config\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.372923 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.409602 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00c308fa-9d36-4fec-8717-6dbbe57523c6-proxy-tls\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.412307 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.416955 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhj6l\" (UniqueName: \"kubernetes.io/projected/7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2-kube-api-access-dhj6l\") pod \"multus-8qwfz\" (UID: \"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\") " pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.428995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff5kh\" (UniqueName: \"kubernetes.io/projected/00c308fa-9d36-4fec-8717-6dbbe57523c6-kube-api-access-ff5kh\") pod \"machine-config-daemon-gq8qd\" (UID: \"00c308fa-9d36-4fec-8717-6dbbe57523c6\") " pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.429375 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5f4z\" (UniqueName: \"kubernetes.io/projected/f8eef9cf-fd62-4c34-b4d1-2e1242bd437a-kube-api-access-m5f4z\") pod \"multus-additional-cni-plugins-wmh7d\" (UID: \"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\") " pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.433268 4805 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.454399 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.470346 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.470405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.470432 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.470461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.470691 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.470738 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.470757 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.470825 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:47.470803645 +0000 UTC m=+25.289486950 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.471312 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.471341 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.471339 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.471464 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:47.471437931 +0000 UTC m=+25.290121226 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.471354 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.471539 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:47.471524564 +0000 UTC m=+25.290208029 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.471320 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.471852 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:47.471807251 +0000 UTC m=+25.290490726 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.481910 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.497624 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.510963 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.524471 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.536668 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.547595 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8qwfz" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.551197 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:49:22.909479336 +0000 UTC Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.551992 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: W0216 20:56:45.559867 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c4f2ac8_1ae6_4215_8155_ea8cd17f07f2.slice/crio-9676594e2b0064ab4284bfca37f2cdff06e7874081a4aecd93368553da68a687 WatchSource:0}: Error finding container 9676594e2b0064ab4284bfca37f2cdff06e7874081a4aecd93368553da68a687: Status 404 returned error can't find the container with id 9676594e2b0064ab4284bfca37f2cdff06e7874081a4aecd93368553da68a687 Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.565064 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c04aedaf994ef8b11bc9b28d10106277fa38ee801f4df07a8d812ca6f0320c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:37Z\\\",\\\"message\\\":\\\"W0216 20:56:26.896088 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:26.896489 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275386 cert, and key in /tmp/serving-cert-2166042406/serving-signer.crt, /tmp/serving-cert-2166042406/serving-signer.key\\\\nI0216 20:56:27.163594 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:27.164036 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:27.164122 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:27.166406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2166042406/tls.crt::/tmp/serving-cert-2166042406/tls.key\\\\\\\"\\\\nF0216 20:56:37.609825 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.565299 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.569447 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.582794 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.595224 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: W0216 20:56:45.596865 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8eef9cf_fd62_4c34_b4d1_2e1242bd437a.slice/crio-959805bbb69f18cbf7120c7ab234c35e6dab5c44d0d29aef1b8f2b36b138e142 WatchSource:0}: Error finding container 959805bbb69f18cbf7120c7ab234c35e6dab5c44d0d29aef1b8f2b36b138e142: Status 404 returned error can't find the container with id 959805bbb69f18cbf7120c7ab234c35e6dab5c44d0d29aef1b8f2b36b138e142 Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.597160 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.597325 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.597529 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.597770 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.597664 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.598892 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:45 crc kubenswrapper[4805]: W0216 20:56:45.601003 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00c308fa_9d36_4fec_8717_6dbbe57523c6.slice/crio-4170952a56dd712db535dd33cfc1ae6dbae346d65ddd2bdb90293cfdc83f1431 WatchSource:0}: Error finding container 4170952a56dd712db535dd33cfc1ae6dbae346d65ddd2bdb90293cfdc83f1431: Status 404 returned error can't find the container with id 4170952a56dd712db535dd33cfc1ae6dbae346d65ddd2bdb90293cfdc83f1431 Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.604765 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.605281 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.606139 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.606562 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 
20:56:45.607217 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.608183 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.608669 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.609425 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.610436 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.611040 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.613116 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.613706 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 
20:56:45.615292 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.615774 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.616272 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.617220 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.617740 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.618746 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.619185 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.619744 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 
20:56:45.620828 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.621278 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.622260 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.622690 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.623760 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.624204 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.625068 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.625429 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.626346 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.627236 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.628297 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.628841 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.635104 4805 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.635226 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.641581 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.642405 4805 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.643040 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.645643 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.647416 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.650033 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.651577 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.652503 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.654193 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.655294 4805 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.656707 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.658222 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.660230 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.660910 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.662106 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.663518 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.664163 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.665506 4805 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.666767 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.667408 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.668795 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.669367 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.669952 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-crk96"] Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.671216 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.674786 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.675023 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.675273 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.675399 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.675598 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.675740 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.675935 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.694863 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.718810 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.733835 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.756245 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c04aedaf994ef8b11bc9b28d10106277fa38ee801f4df07a8d812ca6f0320c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:37Z\\\",\\\"message\\\":\\\"W0216 20:56:26.896088 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 20:56:26.896489 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771275386 cert, and key in /tmp/serving-cert-2166042406/serving-signer.crt, /tmp/serving-cert-2166042406/serving-signer.key\\\\nI0216 20:56:27.163594 1 observer_polling.go:159] Starting file observer\\\\nW0216 20:56:27.164036 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 20:56:27.164122 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:27.166406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2166042406/tls.crt::/tmp/serving-cert-2166042406/tls.key\\\\\\\"\\\\nF0216 20:56:37.609825 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.761534 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.764246 4805 scope.go:117] "RemoveContainer" containerID="3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a" Feb 16 20:56:45 crc kubenswrapper[4805]: E0216 20:56:45.764414 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.765370 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e"} Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.765431 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0"} Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.767468 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc"} Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.767545 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.772591 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-netns\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.772678 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" 
event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"4170952a56dd712db535dd33cfc1ae6dbae346d65ddd2bdb90293cfdc83f1431"} Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.772696 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-systemd-units\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.772879 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-var-lib-openvswitch\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.772963 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-etc-openvswitch\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.773033 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-log-socket\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.773148 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-ovn\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.773223 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.773287 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-config\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.773355 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-kubelet\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.773459 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-openvswitch\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.773558 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stvx\" (UniqueName: \"kubernetes.io/projected/8719b45e-eed5-4265-87de-46967022148f-kube-api-access-6stvx\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.773630 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-systemd\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.773698 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-ovn-kubernetes\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.773799 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-env-overrides\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.773882 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-script-lib\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 
20:56:45.773949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-slash\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.774012 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8719b45e-eed5-4265-87de-46967022148f-ovn-node-metrics-cert\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.774105 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-node-log\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.774247 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-netd\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.774318 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-bin\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 
20:56:45.775512 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" event={"ID":"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a","Type":"ContainerStarted","Data":"959805bbb69f18cbf7120c7ab234c35e6dab5c44d0d29aef1b8f2b36b138e142"} Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.778867 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qwfz" event={"ID":"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2","Type":"ContainerStarted","Data":"cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a"} Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.778925 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qwfz" event={"ID":"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2","Type":"ContainerStarted","Data":"9676594e2b0064ab4284bfca37f2cdff06e7874081a4aecd93368553da68a687"} Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.780493 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-48h2w" event={"ID":"368e42ff-95cf-460e-84c6-ae9aeb3f8657","Type":"ContainerStarted","Data":"3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422"} Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.780529 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-48h2w" event={"ID":"368e42ff-95cf-460e-84c6-ae9aeb3f8657","Type":"ContainerStarted","Data":"0934635c17d7bb805a46c9b0159949f5c809dc8fb3b7c86a9afbc54cec5729e3"} Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.782081 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.797837 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.816752 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.863218 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879244 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-systemd-units\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879367 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-systemd-units\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879409 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-var-lib-openvswitch\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879378 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-var-lib-openvswitch\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-etc-openvswitch\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879473 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-log-socket\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879491 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-config\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879524 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-kubelet\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879544 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-openvswitch\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879563 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-ovn\") pod 
\"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879584 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879609 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6stvx\" (UniqueName: \"kubernetes.io/projected/8719b45e-eed5-4265-87de-46967022148f-kube-api-access-6stvx\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879662 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-systemd\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879681 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-ovn-kubernetes\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.879704 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-env-overrides\") pod 
\"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-script-lib\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880254 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-slash\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880334 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-node-log\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880358 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-netd\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880381 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8719b45e-eed5-4265-87de-46967022148f-ovn-node-metrics-cert\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880409 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-bin\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880461 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-bin\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880519 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-netns\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880632 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-ovn\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880675 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc 
kubenswrapper[4805]: I0216 20:56:45.880895 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-netns\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880898 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-systemd\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880943 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-etc-openvswitch\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880965 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-log-socket\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.880360 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-env-overrides\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.881010 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-node-log\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.881020 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-ovn-kubernetes\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.881057 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-netd\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.881074 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-slash\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.881095 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-script-lib\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.881279 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-kubelet\") pod 
\"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.881304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-openvswitch\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.881598 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-config\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.896424 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8719b45e-eed5-4265-87de-46967022148f-ovn-node-metrics-cert\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.896850 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.922102 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6stvx\" (UniqueName: \"kubernetes.io/projected/8719b45e-eed5-4265-87de-46967022148f-kube-api-access-6stvx\") pod \"ovnkube-node-crk96\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.944310 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.975037 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:45 crc kubenswrapper[4805]: I0216 20:56:45.994771 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:45Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.051773 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.060431 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: W0216 20:56:46.069044 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8719b45e_eed5_4265_87de_46967022148f.slice/crio-dacdb696bc440d7cb506367c083fc5b450da326cd9c506b87c1c65ba60abb8ad WatchSource:0}: Error finding container dacdb696bc440d7cb506367c083fc5b450da326cd9c506b87c1c65ba60abb8ad: Status 404 returned error can't find the container with id dacdb696bc440d7cb506367c083fc5b450da326cd9c506b87c1c65ba60abb8ad Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.093172 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.110525 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.126669 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.144811 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.161743 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.182011 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.196796 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.209211 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.230102 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.244877 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.552305 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:45:23.067005568 +0000 UTC Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.784779 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957" exitCode=0 Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.784853 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957"} Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.784909 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"dacdb696bc440d7cb506367c083fc5b450da326cd9c506b87c1c65ba60abb8ad"} Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.787264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" 
event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794"} Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.787350 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4"} Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.789110 4805 generic.go:334] "Generic (PLEG): container finished" podID="f8eef9cf-fd62-4c34-b4d1-2e1242bd437a" containerID="f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9" exitCode=0 Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.789286 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" event={"ID":"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a","Type":"ContainerDied","Data":"f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9"} Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.806240 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.829181 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.842787 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.856932 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.877027 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.891033 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.911690 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.924081 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.941705 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.961553 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:46 crc kubenswrapper[4805]: I0216 20:56:46.975692 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:46Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.005932 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.029923 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.049653 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.066529 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.067878 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 
16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.068883 4805 scope.go:117] "RemoveContainer" containerID="3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a" Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.069142 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.130191 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.144088 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.160127 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.177449 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb
6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.193040 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.218181 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.237677 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.255416 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3
be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.301137 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.429973 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.430247 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:56:51.43022093 +0000 UTC m=+29.248904225 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.531565 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.531617 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.531648 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.531687 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.531923 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.531950 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.531964 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.532020 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:51.532002439 +0000 UTC m=+29.350685734 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.532120 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.532137 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.532144 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.532167 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:51.532160813 +0000 UTC m=+29.350844108 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.532204 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.532236 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:51.532224105 +0000 UTC m=+29.350907400 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.532355 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.532392 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:51.532383959 +0000 UTC m=+29.351067254 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.553201 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:31:46.663788211 +0000 UTC Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.596626 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-c5pjk"] Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.599065 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-c5pjk" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.603272 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.603400 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.603551 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.603574 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.603714 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.603759 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.603866 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 20:56:47 crc kubenswrapper[4805]: E0216 20:56:47.604450 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.606521 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.618021 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.619770 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.644557 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.666797 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.689856 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb
6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.706366 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.733426 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.733676 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/14b20786-6d22-491c-9054-ae32a4f25efd-serviceca\") pod \"node-ca-c5pjk\" (UID: \"14b20786-6d22-491c-9054-ae32a4f25efd\") " pod="openshift-image-registry/node-ca-c5pjk" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.733745 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfhzn\" (UniqueName: \"kubernetes.io/projected/14b20786-6d22-491c-9054-ae32a4f25efd-kube-api-access-dfhzn\") pod \"node-ca-c5pjk\" (UID: \"14b20786-6d22-491c-9054-ae32a4f25efd\") " pod="openshift-image-registry/node-ca-c5pjk" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.733809 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14b20786-6d22-491c-9054-ae32a4f25efd-host\") pod \"node-ca-c5pjk\" (UID: \"14b20786-6d22-491c-9054-ae32a4f25efd\") " pod="openshift-image-registry/node-ca-c5pjk" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.761517 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.781412 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f61
9b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.798276 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257"} Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.802365 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" 
event={"ID":"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a","Type":"ContainerStarted","Data":"faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9"} Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.803707 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.810900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325"} Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.810960 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba"} Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.810971 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c"} Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.810982 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68"} Feb 16 20:56:47 crc 
kubenswrapper[4805]: I0216 20:56:47.810996 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42"} Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.811009 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5"} Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.823733 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.834423 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/14b20786-6d22-491c-9054-ae32a4f25efd-serviceca\") pod \"node-ca-c5pjk\" (UID: \"14b20786-6d22-491c-9054-ae32a4f25efd\") " pod="openshift-image-registry/node-ca-c5pjk" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.834492 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfhzn\" (UniqueName: \"kubernetes.io/projected/14b20786-6d22-491c-9054-ae32a4f25efd-kube-api-access-dfhzn\") pod \"node-ca-c5pjk\" (UID: \"14b20786-6d22-491c-9054-ae32a4f25efd\") " pod="openshift-image-registry/node-ca-c5pjk" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.834531 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14b20786-6d22-491c-9054-ae32a4f25efd-host\") pod \"node-ca-c5pjk\" (UID: 
\"14b20786-6d22-491c-9054-ae32a4f25efd\") " pod="openshift-image-registry/node-ca-c5pjk" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.834751 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14b20786-6d22-491c-9054-ae32a4f25efd-host\") pod \"node-ca-c5pjk\" (UID: \"14b20786-6d22-491c-9054-ae32a4f25efd\") " pod="openshift-image-registry/node-ca-c5pjk" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.835453 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/14b20786-6d22-491c-9054-ae32a4f25efd-serviceca\") pod \"node-ca-c5pjk\" (UID: \"14b20786-6d22-491c-9054-ae32a4f25efd\") " pod="openshift-image-registry/node-ca-c5pjk" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.844090 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.854362 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfhzn\" (UniqueName: \"kubernetes.io/projected/14b20786-6d22-491c-9054-ae32a4f25efd-kube-api-access-dfhzn\") pod \"node-ca-c5pjk\" (UID: \"14b20786-6d22-491c-9054-ae32a4f25efd\") " pod="openshift-image-registry/node-ca-c5pjk" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.858117 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.870668 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.889891 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.912401 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.912904 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-c5pjk" Feb 16 20:56:47 crc kubenswrapper[4805]: W0216 20:56:47.930203 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b20786_6d22_491c_9054_ae32a4f25efd.slice/crio-147a18991430d35b99072289325ce5d63fa4a8975b2ef19e81170d0ced5a5477 WatchSource:0}: Error finding container 147a18991430d35b99072289325ce5d63fa4a8975b2ef19e81170d0ced5a5477: Status 404 returned error can't find the container with id 147a18991430d35b99072289325ce5d63fa4a8975b2ef19e81170d0ced5a5477 Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.937009 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.950020 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:47 crc kubenswrapper[4805]: I0216 20:56:47.976215 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.004773 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.027806 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.045974 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb
6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.056047 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.076941 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.088619 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.101614 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.122123 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.553853 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:03:46.358961962 +0000 UTC Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.819714 4805 generic.go:334] "Generic (PLEG): container finished" podID="f8eef9cf-fd62-4c34-b4d1-2e1242bd437a" containerID="faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9" exitCode=0 Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.819844 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" event={"ID":"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a","Type":"ContainerDied","Data":"faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9"} Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.820822 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-c5pjk" event={"ID":"14b20786-6d22-491c-9054-ae32a4f25efd","Type":"ContainerStarted","Data":"147a18991430d35b99072289325ce5d63fa4a8975b2ef19e81170d0ced5a5477"} Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.850794 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.867333 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.888059 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.906698 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb
6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.920791 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.936096 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.951783 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.967371 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:48 crc kubenswrapper[4805]: I0216 20:56:48.988651 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.005635 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.023466 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.040370 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.053550 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.554602 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:58:09.863861513 +0000 UTC Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.596778 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.596823 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.596778 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:49 crc kubenswrapper[4805]: E0216 20:56:49.596927 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:49 crc kubenswrapper[4805]: E0216 20:56:49.596986 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:49 crc kubenswrapper[4805]: E0216 20:56:49.597085 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.605697 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.614113 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.626457 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.637379 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.658098 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.676159 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.704296 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.719058 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.733720 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.748147 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.766483 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.781182 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 
20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.793686 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.807834 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.819371 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.821535 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.821579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.821591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.821770 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.825862 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f61
9b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.826886 4805 generic.go:334] "Generic (PLEG): container finished" podID="f8eef9cf-fd62-4c34-b4d1-2e1242bd437a" containerID="06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d" exitCode=0 Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.826996 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" 
event={"ID":"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a","Type":"ContainerDied","Data":"06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d"} Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.829562 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-c5pjk" event={"ID":"14b20786-6d22-491c-9054-ae32a4f25efd","Type":"ContainerStarted","Data":"0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb"} Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.835416 4805 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.835764 4805 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.836917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.836964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.836983 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.837011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.837031 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4805]: E0216 20:56:49.837334 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.857178 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: E0216 20:56:49.858892 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.862612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.862665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.862680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.862703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.862747 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.869962 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: E0216 20:56:49.881925 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.883843 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f61
9b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.887351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.887393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.887405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc 
kubenswrapper[4805]: I0216 20:56:49.887424 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.887436 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4805]: E0216 20:56:49.899950 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.907992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.908033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.908073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.908094 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.908109 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.925088 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: E0216 20:56:49.936745 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.938360 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.941372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.941424 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.941438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.941462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.941476 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.953556 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: E0216 20:56:49.954510 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: E0216 20:56:49.954619 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.957072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.957103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.957114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.957771 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.957862 4805 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:49Z","lastTransitionTime":"2026-02-16T20:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.969888 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-mul
tus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"sta
rtTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:49 crc kubenswrapper[4805]: I0216 20:56:49.988336 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-c
a\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.020314 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.036174 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.054464 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.060501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.060534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.060547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.060569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.060582 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.069699 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.084472 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 
20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.097452 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.113299 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 
20:56:50.162831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.162875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.162900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.162919 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.162944 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.266098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.266155 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.266167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.266192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.266206 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.368912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.368976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.368994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.369021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.369038 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.472470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.472523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.472536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.472554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.472565 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.554792 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:08:11.555829213 +0000 UTC Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.574899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.574930 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.574940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.574957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.574966 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.676921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.676982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.676996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.677028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.677046 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.780473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.781234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.781357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.781486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.781614 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.839276 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40"} Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.842256 4805 generic.go:334] "Generic (PLEG): container finished" podID="f8eef9cf-fd62-4c34-b4d1-2e1242bd437a" containerID="947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c" exitCode=0 Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.842450 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" event={"ID":"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a","Type":"ContainerDied","Data":"947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c"} Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.858250 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.871469 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.886897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.886958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.886975 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.887000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.887021 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.892011 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.909382 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.928172 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.943857 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.961443 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.978064 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.996141 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.996258 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.996272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.996095 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 
20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:50Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.996297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:50 crc kubenswrapper[4805]: I0216 20:56:50.996512 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:50Z","lastTransitionTime":"2026-02-16T20:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.019170 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.040357 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600
dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.054919 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.078342 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.093512 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.099247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.099279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.099290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.099306 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.099316 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.204377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.204428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.204439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.204458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.204469 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.307070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.307152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.307174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.307206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.307229 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.412339 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.412397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.412413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.412434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.412449 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.476912 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.477211 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 20:56:59.477162829 +0000 UTC m=+37.295846114 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.515437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.515487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.515506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.515530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.515548 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.555593 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:44:01.166539165 +0000 UTC Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.578458 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.578535 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.578590 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.578632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.578774 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.578809 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.578833 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.578834 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.578874 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.578914 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.578923 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-16 20:56:59.578897168 +0000 UTC m=+37.397580493 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.578971 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.579005 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.578974 +0000 UTC m=+37.397657305 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.579068 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.579042321 +0000 UTC m=+37.397725656 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.579135 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.579262 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.579238787 +0000 UTC m=+37.397922122 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.596871 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.596874 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.597094 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.597048 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.597188 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:51 crc kubenswrapper[4805]: E0216 20:56:51.597245 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.620300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.620356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.620372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.620398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.620417 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.723986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.724057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.724078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.724107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.724126 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.827441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.827486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.827500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.827520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.827533 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.851828 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" event={"ID":"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a","Type":"ContainerStarted","Data":"18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068"} Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.869571 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.896824 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.920642 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb
6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.936089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.936162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.936180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.936208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.936227 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:51Z","lastTransitionTime":"2026-02-16T20:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.940919 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.961574 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:51 crc kubenswrapper[4805]: I0216 20:56:51.983496 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.005873 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.025331 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.041051 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.041105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.041123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 
20:56:52.041144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.041157 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.045159 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.075499 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc08
6a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.094964 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.113341 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.128033 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.145001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.145038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.145047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.145066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.145076 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.148343 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.247934 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.247982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.247993 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.248013 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.248025 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.350959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.350999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.351009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.351028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.351038 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.454533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.454586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.454599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.454618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.454631 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.555819 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:42:52.200147805 +0000 UTC Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.557293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.557326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.557335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.557354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.557364 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.660567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.660602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.660611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.660630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.660641 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.762821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.762873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.762882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.762901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.762913 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.864668 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd"} Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.864760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.865047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.865061 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.865077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.865092 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.882987 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.895920 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.922226 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f
4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.944344 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.962703 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f61
9b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.967144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.967194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.967209 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:52 crc 
kubenswrapper[4805]: I0216 20:56:52.967231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.967245 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:52Z","lastTransitionTime":"2026-02-16T20:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:52 crc kubenswrapper[4805]: I0216 20:56:52.983943 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.011940 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:52Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.024711 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.044993 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.057236 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.069663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.069694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.069731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.069750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.069760 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.074382 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e9345
26edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.088916 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.103674 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.120279 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.172291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.172318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.172326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.172342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.172351 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.275167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.275223 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.275239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.275260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.275274 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.357668 4805 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.382856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.382920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.382939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.382967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.382985 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.486140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.486195 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.486204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.486225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.486235 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.556712 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 20:38:43.03375436 +0000 UTC Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.588634 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.588675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.588685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.588703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.588728 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.597033 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.597034 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.597126 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:53 crc kubenswrapper[4805]: E0216 20:56:53.597212 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:53 crc kubenswrapper[4805]: E0216 20:56:53.597137 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:53 crc kubenswrapper[4805]: E0216 20:56:53.597366 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.612042 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-releas
e\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.624488 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.638651 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.652970 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.666917 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.681129 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.691590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.691656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.691668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.691688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.691704 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.697201 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.714907 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.740781 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 
20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.759145 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.786746 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f
4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.794401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.794576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.794685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.795010 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.795108 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.800679 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.826270 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.854258 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.873951 4805 generic.go:334] "Generic (PLEG): container finished" podID="f8eef9cf-fd62-4c34-b4d1-2e1242bd437a" containerID="18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068" exitCode=0 Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.874084 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" event={"ID":"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a","Type":"ContainerDied","Data":"18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068"} Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.874137 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.874215 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.874637 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.900709 4805 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.905315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.905459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.905556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.905645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.905759 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:53Z","lastTransitionTime":"2026-02-16T20:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.919942 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.933426 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.933919 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.943873 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.952568 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.969012 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:53 crc kubenswrapper[4805]: I0216 20:56:53.985384 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.000801 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.008540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.008583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.008603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.008621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.008632 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.015542 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.032605 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.053566 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 
20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.067189 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.083093 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.095820 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3a
c126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.114261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.114615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.114630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.114650 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.114666 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.125170 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.143743 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.174116 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.190848 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.202556 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.214879 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.223020 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.223057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.223067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.223084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.223094 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.234983 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.258050 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.277209 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.288779 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.307449 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.322809 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.326848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.326902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.326921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.326948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.326967 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.337361 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.351939 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.373449 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.430754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.430796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.430827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.430846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.430858 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.533131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.533174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.533186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.533209 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.533222 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.557564 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:45:31.308093192 +0000 UTC Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.636169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.636448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.636558 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.636645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.636703 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.740255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.740601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.740680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.740807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.740867 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.846561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.846624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.846639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.846667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.846685 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.883744 4805 generic.go:334] "Generic (PLEG): container finished" podID="f8eef9cf-fd62-4c34-b4d1-2e1242bd437a" containerID="b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143" exitCode=0 Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.883930 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.884690 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" event={"ID":"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a","Type":"ContainerDied","Data":"b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143"} Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.900526 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.926000 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.939167 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.953304 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.958229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.958267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.958276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.958295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.958305 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:54Z","lastTransitionTime":"2026-02-16T20:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.967047 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:54 crc kubenswrapper[4805]: I0216 20:56:54.989545 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600
dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:54Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.010355 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 
20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.027468 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.043819 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.060551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 
20:56:55.060589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.060601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.060619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.060631 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.069055 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.093895 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.110055 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.128056 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.151931 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.163378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.163431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.163440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.163458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.163471 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.266028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.266083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.266095 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.266116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.266129 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.369583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.369646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.369657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.369677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.369689 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.472599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.472663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.472680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.472704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.472761 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.559779 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:37:15.788730953 +0000 UTC Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.575621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.575677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.575696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.575756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.575778 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.600279 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:55 crc kubenswrapper[4805]: E0216 20:56:55.600433 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.600828 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.600954 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:55 crc kubenswrapper[4805]: E0216 20:56:55.601067 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:55 crc kubenswrapper[4805]: E0216 20:56:55.601157 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.677772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.677805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.677814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.677830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.677840 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.780624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.780667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.780676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.780694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.780704 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.883814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.883857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.883867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.883885 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.883897 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.897433 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" event={"ID":"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a","Type":"ContainerStarted","Data":"ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3"} Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.897636 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.921300 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.943184 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.959116 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.980399 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:55Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.986674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.986789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.986821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.986849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:55 crc kubenswrapper[4805]: I0216 20:56:55.986876 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:55Z","lastTransitionTime":"2026-02-16T20:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.002807 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.019764 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.037023 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a9
4578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.050282 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.066904 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.090048 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.090378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.090774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.090785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.090804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.090834 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.110457 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.126847 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.142526 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.159771 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:56Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.194295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.194375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.194394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.194422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.194440 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.297143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.297200 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.297217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.297244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.297259 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.401290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.401362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.401378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.401398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.401411 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.504071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.504126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.504138 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.504158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.504174 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.560149 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:34:40.5975371 +0000 UTC Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.607044 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.607118 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.607140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.607173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.607194 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.719371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.719430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.719441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.719462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.719475 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.821584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.821642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.821654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.821671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.821682 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.924606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.925082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.925096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.925119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:56 crc kubenswrapper[4805]: I0216 20:56:56.925133 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:56Z","lastTransitionTime":"2026-02-16T20:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.031648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.031687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.031697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.031729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.031739 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.134211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.134251 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.134260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.134277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.134288 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.237531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.237601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.237627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.237659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.237684 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.340153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.340211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.340229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.340255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.340274 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.443045 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.443081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.443089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.443108 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.443122 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.547104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.547177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.547194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.547224 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.547243 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.560528 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:23:59.778104268 +0000 UTC Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.597576 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.597636 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.597636 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:57 crc kubenswrapper[4805]: E0216 20:56:57.597786 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:57 crc kubenswrapper[4805]: E0216 20:56:57.597912 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:57 crc kubenswrapper[4805]: E0216 20:56:57.598086 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.650653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.650753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.650781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.650808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.650828 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.753631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.753673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.753685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.753712 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.753770 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.844900 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj"] Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.845452 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.852242 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.852354 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.858425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.858499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.858524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.858566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.858595 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.870197 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.893052 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.906824 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/0.log" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.911710 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd" exitCode=1 Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.911791 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.913165 4805 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.913295 4805 scope.go:117] "RemoveContainer" containerID="56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.930939 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.953562 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a9
4578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.962430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.962474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.962485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.962506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.962517 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:57Z","lastTransitionTime":"2026-02-16T20:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.964188 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pln6n\" (UniqueName: \"kubernetes.io/projected/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-kube-api-access-pln6n\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.964251 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.964271 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.964298 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.975558 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b
001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:57 crc kubenswrapper[4805]: I0216 20:56:57.987034 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:57.999951 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:57Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.012220 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3a
c126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.030630 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.045954 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.061056 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.064887 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.064933 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.064996 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.065057 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pln6n\" (UniqueName: \"kubernetes.io/projected/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-kube-api-access-pln6n\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.065055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.065109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.065123 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.065147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.065163 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.066086 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.066452 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.073960 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.074824 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.087343 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pln6n\" (UniqueName: \"kubernetes.io/projected/7ac6d346-d2d2-4ad6-a72f-7506b709bea0-kube-api-access-pln6n\") pod \"ovnkube-control-plane-749d76644c-2qdfj\" (UID: \"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.096457 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.112562 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.124695 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.135860 4805 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery
-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.157371 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.167939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.168002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.168014 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.168037 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.168055 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.172097 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.175521 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: W0216 20:56:58.183957 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ac6d346_d2d2_4ad6_a72f_7506b709bea0.slice/crio-bd659442c4f9d1341569cab66dc39cc95a3af50400c89a505fdd45fbec1a2cc9 WatchSource:0}: Error finding container bd659442c4f9d1341569cab66dc39cc95a3af50400c89a505fdd45fbec1a2cc9: Status 404 returned error can't find the container with id bd659442c4f9d1341569cab66dc39cc95a3af50400c89a505fdd45fbec1a2cc9 Feb 16 
20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.198379 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.216485 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.233328 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a9
4578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.252619 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: 
I0216 20:56:58.270945 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.271299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.271407 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.271500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.271576 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.276340 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:56:56.721372 6114 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:56.721409 6114 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0216 20:56:56.721449 6114 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 20:56:56.721505 6114 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 20:56:56.721456 6114 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 20:56:56.721550 6114 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 20:56:56.721614 6114 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 20:56:56.721624 6114 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:56.721642 6114 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:56.721661 6114 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:56.721670 6114 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:56.721714 6114 factory.go:656] Stopping watch factory\\\\nI0216 20:56:56.721714 6114 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:56.721750 6114 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:56.721754 6114 ovnkube.go:599] Stopped 
ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16
990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.296772 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.310958 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.324386 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.347215 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.374550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.374611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.374622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.374643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.374654 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.376606 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.392952 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.477483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.477526 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.477535 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.477551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.477565 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.560812 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:22:34.963631629 +0000 UTC Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.580687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.580757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.580769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.580789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.580803 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.683037 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.683080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.683091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.683110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.683123 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.786208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.786266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.786277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.786297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.786313 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.888890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.888937 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.888947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.888963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.888976 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.917273 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" event={"ID":"7ac6d346-d2d2-4ad6-a72f-7506b709bea0","Type":"ContainerStarted","Data":"7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.917331 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" event={"ID":"7ac6d346-d2d2-4ad6-a72f-7506b709bea0","Type":"ContainerStarted","Data":"0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.917343 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" event={"ID":"7ac6d346-d2d2-4ad6-a72f-7506b709bea0","Type":"ContainerStarted","Data":"bd659442c4f9d1341569cab66dc39cc95a3af50400c89a505fdd45fbec1a2cc9"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.919742 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/1.log" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.920393 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/0.log" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.923176 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc" exitCode=1 Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.923214 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" 
event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc"} Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.923269 4805 scope.go:117] "RemoveContainer" containerID="56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.924202 4805 scope.go:117] "RemoveContainer" containerID="6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc" Feb 16 20:56:58 crc kubenswrapper[4805]: E0216 20:56:58.924482 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.936448 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.952328 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.966493 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.979243 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.991086 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:58Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.991194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.991265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.991283 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.991307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:58 crc kubenswrapper[4805]: I0216 20:56:58.991320 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:58Z","lastTransitionTime":"2026-02-16T20:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.002416 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e9345
26edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.013113 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.024435 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.037001 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.047772 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb
6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.054776 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.065775 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c83292
4eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.076207 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.088075 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.094280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 
20:56:59.094313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.094328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.094349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.094363 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.105385 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20
:56:56Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:56:56.721372 6114 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:56.721409 6114 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:56.721449 6114 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 20:56:56.721505 6114 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 20:56:56.721456 6114 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 20:56:56.721550 6114 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 20:56:56.721614 6114 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 20:56:56.721624 6114 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:56.721642 6114 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:56.721661 6114 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:56.721670 6114 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:56.721714 6114 factory.go:656] Stopping watch factory\\\\nI0216 20:56:56.721714 6114 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:56.721750 6114 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:56.721754 6114 ovnkube.go:599] Stopped 
ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16
990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.116931 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.129309 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.142364 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.159275 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.171638 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.186375 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c83292
4eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.201888 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.202100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.202120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.202142 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.202156 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.202657 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.218143 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.229481 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f61
9b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.249478 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:56:56.721372 6114 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:56.721409 6114 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:56.721449 6114 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 
20:56:56.721505 6114 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 20:56:56.721456 6114 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 20:56:56.721550 6114 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 20:56:56.721614 6114 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 20:56:56.721624 6114 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:56.721642 6114 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:56.721661 6114 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:56.721670 6114 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:56.721714 6114 factory.go:656] Stopping watch factory\\\\nI0216 20:56:56.721714 6114 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:56.721750 6114 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:56.721754 6114 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 20:56:58.816771 6303 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:58.816791 6303 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:58.816792 6303 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:58.816797 6303 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:58.816807 6303 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:58.816824 6303 handler.go:208] Removed *v1.Node event handler 
7\\\\nI0216 20:56:58.816836 6303 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:58.816843 6303 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:58.816846 6303 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:58.816854 6303 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:56:58.816860 6303 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:58.816866 6303 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:58.816883 6303 factory.go:656] Stopping watch factory\\\\nI0216 20:56:58.816906 6303 ovnkube.go:599] Stopped ovnkube\\\\nI0216 20:56:58.816914 6303 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:58.816925 6303 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.263536 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.279989 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.292230 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.303798 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.305812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.305870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.305882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.305898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.305912 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.318432 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.353312 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-b6xdh"] Feb 16 20:56:59 crc 
kubenswrapper[4805]: I0216 20:56:59.354417 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.354528 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.372184 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d6
6438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}
\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.387099 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.404256 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.408899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.408971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.408997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 
20:56:59.409030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.409055 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.422195 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.432651 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc08
6a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.446577 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.463454 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.479318 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:56:59 crc 
kubenswrapper[4805]: E0216 20:56:59.479499 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.479467447 +0000 UTC m=+53.298150782 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.479780 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.479882 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qphtb\" (UniqueName: \"kubernetes.io/projected/68747e4a-6576-44c3-b663-250315f6712f-kube-api-access-qphtb\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.482242 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.501284 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.511563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.511640 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.511660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.511695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.511738 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.517199 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.529332 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.544200 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a9
4578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.553913 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc 
kubenswrapper[4805]: I0216 20:56:59.561402 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:28:41.38796468 +0000 UTC Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.565173 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.575574 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.580989 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.581516 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qphtb\" (UniqueName: \"kubernetes.io/projected/68747e4a-6576-44c3-b663-250315f6712f-kube-api-access-qphtb\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.581561 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.581598 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.581689 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.581737 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.581904 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.581972 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs podName:68747e4a-6576-44c3-b663-250315f6712f nodeName:}" failed. No retries permitted until 2026-02-16 20:57:00.081950541 +0000 UTC m=+37.900633836 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs") pod "network-metrics-daemon-b6xdh" (UID: "68747e4a-6576-44c3-b663-250315f6712f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.582246 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.582289 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.582278469 +0000 UTC m=+53.400961764 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.582769 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.582807 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.582795754 +0000 UTC m=+53.401479049 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.582884 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.582899 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.582912 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.582938 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.582930858 +0000 UTC m=+53.401614153 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.582995 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.583009 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.583018 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.583048 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.58303878 +0000 UTC m=+53.401722075 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.595713 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:56:56.721372 6114 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:56.721409 6114 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:56.721449 6114 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 
20:56:56.721505 6114 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 20:56:56.721456 6114 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 20:56:56.721550 6114 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 20:56:56.721614 6114 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 20:56:56.721624 6114 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:56.721642 6114 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:56.721661 6114 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:56.721670 6114 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:56.721714 6114 factory.go:656] Stopping watch factory\\\\nI0216 20:56:56.721714 6114 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:56.721750 6114 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:56.721754 6114 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 20:56:58.816771 6303 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:58.816791 6303 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:58.816792 6303 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:58.816797 6303 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:58.816807 6303 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:58.816824 6303 handler.go:208] Removed *v1.Node event handler 
7\\\\nI0216 20:56:58.816836 6303 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:58.816843 6303 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:58.816846 6303 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:58.816854 6303 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:56:58.816860 6303 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:58.816866 6303 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:58.816883 6303 factory.go:656] Stopping watch factory\\\\nI0216 20:56:58.816906 6303 ovnkube.go:599] Stopped ovnkube\\\\nI0216 20:56:58.816914 6303 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:58.816925 6303 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.599206 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.599376 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.599471 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.599570 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.599607 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:56:59 crc kubenswrapper[4805]: E0216 20:56:59.599688 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.600475 4805 scope.go:117] "RemoveContainer" containerID="3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.605661 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qphtb\" (UniqueName: \"kubernetes.io/projected/68747e4a-6576-44c3-b663-250315f6712f-kube-api-access-qphtb\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.617593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.617632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.617644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.617662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.617676 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.722370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.722436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.722449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.722469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.722483 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.825573 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.825625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.825641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.825663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.825678 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.928337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.928398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.928417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.928443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.928461 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:56:59Z","lastTransitionTime":"2026-02-16T20:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.930681 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/1.log" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.938628 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.946981 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00"} Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.947489 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.966629 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:56:59 crc kubenswrapper[4805]: I0216 20:56:59.981238 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:56:59Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.003858 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:56:56.721372 6114 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:56.721409 6114 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:56.721449 6114 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 
20:56:56.721505 6114 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 20:56:56.721456 6114 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 20:56:56.721550 6114 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 20:56:56.721614 6114 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 20:56:56.721624 6114 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:56.721642 6114 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:56.721661 6114 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:56.721670 6114 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:56.721714 6114 factory.go:656] Stopping watch factory\\\\nI0216 20:56:56.721714 6114 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:56.721750 6114 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:56.721754 6114 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 20:56:58.816771 6303 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:58.816791 6303 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:58.816792 6303 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:58.816797 6303 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:58.816807 6303 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:58.816824 6303 handler.go:208] Removed *v1.Node event handler 
7\\\\nI0216 20:56:58.816836 6303 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:58.816843 6303 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:58.816846 6303 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:58.816854 6303 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:56:58.816860 6303 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:58.816866 6303 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:58.816883 6303 factory.go:656] Stopping watch factory\\\\nI0216 20:56:58.816906 6303 ovnkube.go:599] Stopped ovnkube\\\\nI0216 20:56:58.816914 6303 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:58.816925 6303 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.020391 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.031124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.031176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.031190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.031212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.031229 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.036478 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.052335 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.064553 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.080561 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.087872 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:00 crc kubenswrapper[4805]: E0216 20:57:00.088085 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:00 crc kubenswrapper[4805]: E0216 20:57:00.088151 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs podName:68747e4a-6576-44c3-b663-250315f6712f nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.088129882 +0000 UTC m=+38.906813167 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs") pod "network-metrics-daemon-b6xdh" (UID: "68747e4a-6576-44c3-b663-250315f6712f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.098083 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.114777 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.131706 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.133909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.133964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.133975 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.133999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.134011 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.152917 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.173296 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.190159 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.208256 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a9
4578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.220983 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc 
kubenswrapper[4805]: I0216 20:57:00.236901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.236950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.236960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.236979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.236992 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.324967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.325280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.325369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.325504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.325593 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: E0216 20:57:00.342043 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.346994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.347054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.347068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.347090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.347105 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: E0216 20:57:00.361247 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.366310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.366370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.366384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.366412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.366426 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: E0216 20:57:00.382600 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.387686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.387912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.387970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.387999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.388015 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: E0216 20:57:00.401574 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.406777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.406973 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.407088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.407207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.407338 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: E0216 20:57:00.420182 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:00Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:00 crc kubenswrapper[4805]: E0216 20:57:00.420331 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.422349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.422389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.422399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.422419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.422431 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.525643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.525685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.525694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.525709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.525744 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.562413 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:31:01.176657212 +0000 UTC Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.597008 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:00 crc kubenswrapper[4805]: E0216 20:57:00.597177 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.628699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.628772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.628789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.628812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.628825 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.731344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.731441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.731461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.731487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.731518 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.834930 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.835032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.835051 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.835077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.835093 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.939087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.939210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.939238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.939270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:00 crc kubenswrapper[4805]: I0216 20:57:00.939298 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:00Z","lastTransitionTime":"2026-02-16T20:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.042933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.043012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.043030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.043059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.043079 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.098869 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:01 crc kubenswrapper[4805]: E0216 20:57:01.099163 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:01 crc kubenswrapper[4805]: E0216 20:57:01.099777 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs podName:68747e4a-6576-44c3-b663-250315f6712f nodeName:}" failed. No retries permitted until 2026-02-16 20:57:03.099704924 +0000 UTC m=+40.918388259 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs") pod "network-metrics-daemon-b6xdh" (UID: "68747e4a-6576-44c3-b663-250315f6712f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.145584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.145666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.145678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.145729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.145743 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.249253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.249302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.249311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.249327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.249339 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.353196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.353256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.353290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.353308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.353318 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.456240 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.456360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.456384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.456458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.456477 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.560764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.560818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.560828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.560849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.560860 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.562903 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:34:54.392803038 +0000 UTC Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.597959 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.597975 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:01 crc kubenswrapper[4805]: E0216 20:57:01.598978 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.598105 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:01 crc kubenswrapper[4805]: E0216 20:57:01.599237 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:01 crc kubenswrapper[4805]: E0216 20:57:01.599312 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.663312 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.663354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.663365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.663380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.663391 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.766709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.766792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.766805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.766832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.766853 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.870096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.870158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.870166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.870185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.870201 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.973263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.973313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.973326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.973342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:01 crc kubenswrapper[4805]: I0216 20:57:01.973355 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:01Z","lastTransitionTime":"2026-02-16T20:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.076502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.076557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.076568 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.076591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.076604 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.180208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.180315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.180335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.180363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.180387 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.283825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.283900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.283916 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.283933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.283945 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.386963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.387019 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.387029 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.387049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.387060 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.489527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.489596 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.489610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.489630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.489642 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.563192 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 16:54:15.733159434 +0000 UTC Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.593401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.593482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.593512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.593547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.593567 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.596868 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:02 crc kubenswrapper[4805]: E0216 20:57:02.597097 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.696140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.696203 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.696220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.696248 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.696265 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.799358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.799410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.799423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.799443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.799461 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.904402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.904460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.904471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.904489 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:02 crc kubenswrapper[4805]: I0216 20:57:02.904501 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:02Z","lastTransitionTime":"2026-02-16T20:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.007275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.007343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.007365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.007394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.007417 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.110468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.110540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.110559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.110583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.110601 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.132125 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:03 crc kubenswrapper[4805]: E0216 20:57:03.132282 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:03 crc kubenswrapper[4805]: E0216 20:57:03.132379 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs podName:68747e4a-6576-44c3-b663-250315f6712f nodeName:}" failed. No retries permitted until 2026-02-16 20:57:07.132351474 +0000 UTC m=+44.951034809 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs") pod "network-metrics-daemon-b6xdh" (UID: "68747e4a-6576-44c3-b663-250315f6712f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.214618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.214687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.214702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.214728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.214771 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.318018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.318061 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.318071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.318085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.318094 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.421969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.422119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.422140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.422166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.422185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.524447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.524497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.524509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.524525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.524536 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.564194 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:57:41.205665357 +0000 UTC Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.597336 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.597502 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.597511 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:03 crc kubenswrapper[4805]: E0216 20:57:03.597827 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:03 crc kubenswrapper[4805]: E0216 20:57:03.598047 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:03 crc kubenswrapper[4805]: E0216 20:57:03.598184 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.618014 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.628453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.628501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.628513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.628539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.628553 4805 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.632871 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.648976 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.664377 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.680960 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.695990 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.713104 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a9
4578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.725185 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc 
kubenswrapper[4805]: I0216 20:57:03.731103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.731156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.731167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.731183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.731194 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.754472 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c5c3cb997feb17b38e751d41baf3f4f82e076253fe8ef53c30e34dce1575cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:56Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 20:56:56.721372 6114 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:56.721409 6114 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:56.721449 6114 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 
20:56:56.721505 6114 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 20:56:56.721456 6114 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 20:56:56.721550 6114 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 20:56:56.721614 6114 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 20:56:56.721624 6114 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:56.721642 6114 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:56.721661 6114 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:56.721670 6114 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:56.721714 6114 factory.go:656] Stopping watch factory\\\\nI0216 20:56:56.721714 6114 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:56.721750 6114 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:56.721754 6114 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 20:56:58.816771 6303 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:58.816791 6303 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:58.816792 6303 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:58.816797 6303 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 20:56:58.816807 6303 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:58.816824 6303 handler.go:208] Removed *v1.Node event handler 
7\\\\nI0216 20:56:58.816836 6303 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:58.816843 6303 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:58.816846 6303 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:58.816854 6303 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:56:58.816860 6303 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:58.816866 6303 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:58.816883 6303 factory.go:656] Stopping watch factory\\\\nI0216 20:56:58.816906 6303 ovnkube.go:599] Stopped ovnkube\\\\nI0216 20:56:58.816914 6303 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:58.816925 6303 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.766374 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.784541 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.799443 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.816517 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3
d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.837700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.837825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.837852 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.837886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.837906 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.847186 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.868748 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.890296 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.941886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:03 crc 
kubenswrapper[4805]: I0216 20:57:03.941946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.941959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.941981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:03 crc kubenswrapper[4805]: I0216 20:57:03.941993 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:03Z","lastTransitionTime":"2026-02-16T20:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.044304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.044348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.044358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.044374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.044386 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.147839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.147912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.147927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.147953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.147969 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.251841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.251892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.251908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.251931 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.251948 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.355043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.355114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.355132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.355154 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.355170 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.457860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.457907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.457918 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.457935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.457957 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.561043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.561092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.561106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.561165 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.561277 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.565017 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:29:47.619679839 +0000 UTC Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.597055 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:04 crc kubenswrapper[4805]: E0216 20:57:04.597252 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.664830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.664910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.664932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.664964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.664986 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.768008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.768084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.768106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.768134 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.768160 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.872213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.872289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.872310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.872337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.872354 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.975984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.976042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.976056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.976076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:04 crc kubenswrapper[4805]: I0216 20:57:04.976090 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:04Z","lastTransitionTime":"2026-02-16T20:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.078652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.078700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.078713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.078733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.078745 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.181351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.181427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.181446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.181483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.181503 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.285236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.285301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.285316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.285343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.285359 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.388461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.388508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.388519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.388538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.388550 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.492385 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.492456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.492467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.492486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.492495 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.565313 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:29:08.222083567 +0000 UTC Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.595295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.595356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.595381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.595410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.595434 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.596872 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.596901 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.596901 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:05 crc kubenswrapper[4805]: E0216 20:57:05.597290 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:05 crc kubenswrapper[4805]: E0216 20:57:05.597086 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:05 crc kubenswrapper[4805]: E0216 20:57:05.597398 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.698604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.698697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.698723 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.698779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.698807 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.802977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.803936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.803961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.803991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.804011 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.908227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.908310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.908329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.908359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:05 crc kubenswrapper[4805]: I0216 20:57:05.908386 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:05Z","lastTransitionTime":"2026-02-16T20:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.013325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.013403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.013415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.013437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.013451 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.116709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.116798 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.116813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.116838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.116857 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.220644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.220715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.220770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.220799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.220819 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.324122 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.324188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.324209 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.324236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.324256 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.427833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.427904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.427924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.427951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.427969 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.531709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.531794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.531809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.531831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.531845 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.566503 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:01:48.775541662 +0000 UTC Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.597348 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:06 crc kubenswrapper[4805]: E0216 20:57:06.597544 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.634991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.635298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.635366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.635433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.635513 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.738707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.738815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.738835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.738858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.738874 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.841589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.841974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.842025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.842054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.842076 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.945495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.945559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.945573 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.945592 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:06 crc kubenswrapper[4805]: I0216 20:57:06.945605 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:06Z","lastTransitionTime":"2026-02-16T20:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.047995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.048037 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.048046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.048064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.048075 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.150426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.150527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.150551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.150583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.150606 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.203681 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:07 crc kubenswrapper[4805]: E0216 20:57:07.204036 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:07 crc kubenswrapper[4805]: E0216 20:57:07.204136 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs podName:68747e4a-6576-44c3-b663-250315f6712f nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.20411194 +0000 UTC m=+53.022795275 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs") pod "network-metrics-daemon-b6xdh" (UID: "68747e4a-6576-44c3-b663-250315f6712f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.254358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.254419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.254431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.254455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.254469 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.357907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.357976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.357994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.358022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.358041 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.461254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.461317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.461337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.461362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.461379 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.564377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.564415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.564424 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.564438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.564449 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.566764 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:01:02.224646861 +0000 UTC Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.597391 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.597470 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:07 crc kubenswrapper[4805]: E0216 20:57:07.597567 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:07 crc kubenswrapper[4805]: E0216 20:57:07.597648 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.597689 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:07 crc kubenswrapper[4805]: E0216 20:57:07.597802 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.667869 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.667935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.667954 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.667977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.667996 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.770628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.770691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.770705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.770730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.770774 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.873498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.873532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.873541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.873555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.873566 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.975517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.975566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.975577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.975593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:07 crc kubenswrapper[4805]: I0216 20:57:07.975605 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:07Z","lastTransitionTime":"2026-02-16T20:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.079286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.079341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.079350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.079366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.079377 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.183142 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.183190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.183201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.183219 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.183229 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.288265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.288387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.289008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.289371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.289500 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.393909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.393990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.394013 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.394043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.394068 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.497358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.497445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.497468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.497502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.497527 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.567275 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:54:41.766288966 +0000 UTC Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.597836 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:08 crc kubenswrapper[4805]: E0216 20:57:08.597990 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.604461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.604524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.604548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.604580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.604607 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.629365 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.630497 4805 scope.go:117] "RemoveContainer" containerID="6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc" Feb 16 20:57:08 crc kubenswrapper[4805]: E0216 20:57:08.630771 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.651551 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.666813 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.682844 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.700386 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.707754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.707803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.707815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.707837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.707852 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:08Z","lastTransitionTime":"2026-02-16T20:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.723046 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.737472 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.754522 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a9
4578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc kubenswrapper[4805]: I0216 20:57:08.767965 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:08 crc 
kubenswrapper[4805]: I0216 20:57:08.780907 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.205416 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:08Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 
20:57:09.206776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.206806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.206816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.206834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.206845 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.225233 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 20:56:58.816771 6303 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:58.816791 6303 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:58.816792 6303 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:58.816797 6303 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0216 20:56:58.816807 6303 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:58.816824 6303 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:56:58.816836 6303 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:58.816843 6303 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:58.816846 6303 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:58.816854 6303 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:56:58.816860 6303 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:58.816866 6303 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:58.816883 6303 factory.go:656] Stopping watch factory\\\\nI0216 20:56:58.816906 6303 ovnkube.go:599] Stopped ovnkube\\\\nI0216 20:56:58.816914 6303 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:58.816925 6303 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:09Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.235527 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad
10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:09Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.246458 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:09Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.257943 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:09Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.268933 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:09Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.277962 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:09Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.309263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.309293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.309303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.309318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.309328 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.413287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.413994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.414200 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.414396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.414599 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.518108 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.518189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.518214 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.518245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.518269 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.568072 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:49:39.071297013 +0000 UTC Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.597107 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:09 crc kubenswrapper[4805]: E0216 20:57:09.597276 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.597363 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:09 crc kubenswrapper[4805]: E0216 20:57:09.597446 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.597512 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:09 crc kubenswrapper[4805]: E0216 20:57:09.597586 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.622343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.622472 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.622498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.622530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.622552 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.726322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.726402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.726419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.726446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.726466 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.829380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.829444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.829453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.829469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.829481 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.933252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.933297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.933305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.933323 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:09 crc kubenswrapper[4805]: I0216 20:57:09.933334 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:09Z","lastTransitionTime":"2026-02-16T20:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.036181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.036229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.036243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.036258 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.036268 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.139144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.139183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.139191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.139207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.139217 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.246711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.246797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.246811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.246831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.246843 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.350600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.350657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.350669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.350690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.350703 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.448575 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.448649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.448674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.448703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.448757 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.455414 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 20:57:10 crc kubenswrapper[4805]: E0216 20:57:10.474096 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735
d31fd21e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.474407 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.479105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.479147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.479163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.479183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.479198 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.492370 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: E0216 20:57:10.501908 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.507647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.507679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.507691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.507708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.507745 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.508681 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:
56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: E0216 20:57:10.523699 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.526265 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d
78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.528000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.528052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.528063 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.528083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.528098 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.545266 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: E0216 20:57:10.549889 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.553706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.553776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.553789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.553809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.553820 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.564295 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: E0216 20:57:10.567781 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: E0216 20:57:10.567942 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.568378 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:58:54.542355341 +0000 UTC Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.569599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.569630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.569641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.569659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.569670 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.584524 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.597202 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:10 crc kubenswrapper[4805]: E0216 20:57:10.597376 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.598878 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87
c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.618436 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.632426 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.655067 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c83292
4eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.670973 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.674530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.674571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.674584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.674604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.674617 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.694506 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.714586 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.732654 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.754096 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 20:56:58.816771 6303 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:58.816791 6303 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:58.816792 6303 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:58.816797 6303 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0216 20:56:58.816807 6303 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:58.816824 6303 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:56:58.816836 6303 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:58.816843 6303 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:58.816846 6303 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:58.816854 6303 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:56:58.816860 6303 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:58.816866 6303 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:58.816883 6303 factory.go:656] Stopping watch factory\\\\nI0216 20:56:58.816906 6303 ovnkube.go:599] Stopped ovnkube\\\\nI0216 20:56:58.816914 6303 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:58.816925 6303 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:10Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.777429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.777491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.777510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.777536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.777557 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.880544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.880588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.880597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.880611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.880621 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.983704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.983797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.983810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.983833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:10 crc kubenswrapper[4805]: I0216 20:57:10.983845 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:10Z","lastTransitionTime":"2026-02-16T20:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.086919 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.086972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.086986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.087005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.087018 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.189858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.189927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.189944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.189969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.189987 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.293487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.293546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.293559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.293576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.293588 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.396217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.396270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.396292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.396321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.396345 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.499615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.499674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.499691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.499713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.499766 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.569184 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:26:04.213982662 +0000 UTC Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.597016 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.597119 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:11 crc kubenswrapper[4805]: E0216 20:57:11.597230 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:11 crc kubenswrapper[4805]: E0216 20:57:11.597276 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.599524 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:11 crc kubenswrapper[4805]: E0216 20:57:11.599674 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.603107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.603172 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.603189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.603212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.603234 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.706711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.706800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.706813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.706836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.706853 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.809992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.810050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.810071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.810093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.810109 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.913467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.913509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.913520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.913537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:11 crc kubenswrapper[4805]: I0216 20:57:11.913550 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:11Z","lastTransitionTime":"2026-02-16T20:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.016875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.016951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.016964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.016986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.017001 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.119892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.119926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.119935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.119949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.119958 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.223276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.223335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.223380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.223403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.223420 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.326708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.326827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.326851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.326880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.326901 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.431471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.431559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.431577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.431602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.431621 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.534983 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.535045 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.535054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.535070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.535081 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.569965 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 01:01:23.402504283 +0000 UTC Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.597487 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:12 crc kubenswrapper[4805]: E0216 20:57:12.597820 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.638204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.638275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.638290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.638310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.638349 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.741565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.741634 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.741667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.741698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.741766 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.845707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.845790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.845802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.845825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.845837 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.950465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.950606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.950638 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.950663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:12 crc kubenswrapper[4805]: I0216 20:57:12.950681 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:12Z","lastTransitionTime":"2026-02-16T20:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.053454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.053516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.053528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.053543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.053552 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.156606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.156691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.156752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.156801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.156826 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.260841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.260967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.260993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.261024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.261045 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.364316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.364405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.364429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.364461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.364489 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.467196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.467255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.467269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.467289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.467303 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.570125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.570128 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 20:29:51.899809281 +0000 UTC Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.570175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.570231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.570268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.570293 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.597642 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.597742 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.597707 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:13 crc kubenswrapper[4805]: E0216 20:57:13.597867 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:13 crc kubenswrapper[4805]: E0216 20:57:13.598054 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:13 crc kubenswrapper[4805]: E0216 20:57:13.598195 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.631680 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 20:56:58.816771 6303 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:58.816791 6303 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:58.816792 6303 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:58.816797 6303 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0216 20:56:58.816807 6303 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:58.816824 6303 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:56:58.816836 6303 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:58.816843 6303 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:58.816846 6303 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:58.816854 6303 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:56:58.816860 6303 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:58.816866 6303 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:58.816883 6303 factory.go:656] Stopping watch factory\\\\nI0216 20:56:58.816906 6303 ovnkube.go:599] Stopped ovnkube\\\\nI0216 20:56:58.816914 6303 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:58.816925 6303 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.654301 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.667670 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.673473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 
20:57:13.673540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.673556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.673581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.673599 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.683105 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.695751 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.710181 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.729531 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.747135 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.760293 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.776545 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.777359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.777406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.777419 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.777439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.777454 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.790981 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.805394 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.822113 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5
e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.835110 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.852463 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c83292
4eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.862520 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:13Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.879398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.879453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.879469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.879490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.879507 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.983103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.983145 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.983157 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.983174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:13 crc kubenswrapper[4805]: I0216 20:57:13.983186 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:13Z","lastTransitionTime":"2026-02-16T20:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.086277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.086318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.086330 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.086348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.086361 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.189231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.189280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.189292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.189308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.189338 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.293130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.293197 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.293213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.293235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.293251 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.395886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.395933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.395946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.395965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.395981 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.499133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.499583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.499873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.500135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.500375 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.570565 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 09:19:01.385674086 +0000 UTC Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.597015 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:14 crc kubenswrapper[4805]: E0216 20:57:14.597235 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.603907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.604040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.604122 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.604212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.604246 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.709599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.709662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.709677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.709698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.709714 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.812649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.812758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.812783 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.812812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.812829 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.915506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.915578 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.915602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.915633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:14 crc kubenswrapper[4805]: I0216 20:57:14.915654 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:14Z","lastTransitionTime":"2026-02-16T20:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.019070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.019121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.019141 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.019161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.019175 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.122197 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.122280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.122300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.122378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.122404 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.226384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.226423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.226431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.226446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.226457 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.269659 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.269917 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.270032 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs podName:68747e4a-6576-44c3-b663-250315f6712f nodeName:}" failed. No retries permitted until 2026-02-16 20:57:31.27000704 +0000 UTC m=+69.088690405 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs") pod "network-metrics-daemon-b6xdh" (UID: "68747e4a-6576-44c3-b663-250315f6712f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.329349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.329391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.329400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.329415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.329424 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.432357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.432498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.432564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.432589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.432607 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.535503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.535563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.535575 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.535597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.535610 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.571636 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:17:37.356432195 +0000 UTC Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.571937 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.572145 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:47.572122117 +0000 UTC m=+85.390805412 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.597124 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.597215 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.597303 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.597312 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.597391 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.597461 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.638101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.638175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.638187 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.638211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.638227 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.673913 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.674028 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.674167 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674174 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.674269 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674307 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:47.674271992 +0000 UTC m=+85.492955477 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674386 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674439 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674460 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674524 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674563 4805 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:47.674532749 +0000 UTC m=+85.493216184 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674585 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674608 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674614 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674765 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:47.674677794 +0000 UTC m=+85.493361129 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:57:15 crc kubenswrapper[4805]: E0216 20:57:15.674811 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:47.674794837 +0000 UTC m=+85.493478322 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.741843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.741883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.741892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.741909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.741923 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.845410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.845480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.845496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.845522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.845551 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.948604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.948765 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.948789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.948819 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:15 crc kubenswrapper[4805]: I0216 20:57:15.948845 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:15Z","lastTransitionTime":"2026-02-16T20:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.052251 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.052288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.052299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.052316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.052326 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.154947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.155034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.155046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.155060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.155070 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.258473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.258958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.259153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.259406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.259595 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.362935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.363386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.363527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.364005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.364204 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.466784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.466860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.466879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.466907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.466926 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.570043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.570311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.570335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.570368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.570390 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.571841 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:26:45.800748598 +0000 UTC Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.597545 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:16 crc kubenswrapper[4805]: E0216 20:57:16.597788 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.674042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.674089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.674103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.674124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.674143 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.777140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.777501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.777583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.777663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.777764 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.881570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.881675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.881697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.881763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.881788 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.984871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.985327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.985546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.985793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:16 crc kubenswrapper[4805]: I0216 20:57:16.986035 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:16Z","lastTransitionTime":"2026-02-16T20:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.089542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.089588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.089598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.089614 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.089628 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.193190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.193259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.193278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.193308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.193329 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.296070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.296139 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.296155 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.296178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.296191 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.399102 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.399143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.399152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.399167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.399177 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.503140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.503200 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.503215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.503238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.503255 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.572368 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:39:38.633042838 +0000 UTC Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.597270 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.597276 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:17 crc kubenswrapper[4805]: E0216 20:57:17.597450 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.597500 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:17 crc kubenswrapper[4805]: E0216 20:57:17.597539 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:17 crc kubenswrapper[4805]: E0216 20:57:17.597703 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.607004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.607055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.607066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.607083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.607098 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.710504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.710602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.710624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.710651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.710673 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.816241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.816278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.816289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.816308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.816320 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.919182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.919259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.919280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.919306 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:17 crc kubenswrapper[4805]: I0216 20:57:17.919324 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:17Z","lastTransitionTime":"2026-02-16T20:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.022460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.022542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.022565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.022594 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.022615 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.046880 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.064322 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.069501 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.090821 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.111888 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.125851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.125897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.125909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.125929 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.125943 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.134126 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.156966 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.180986 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c83292
4eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.198544 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.219477 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.228605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.228661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.228679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.228701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 
20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.228743 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.237917 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.248587 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f61
9b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.271577 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 20:56:58.816771 6303 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:58.816791 6303 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:58.816792 6303 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:58.816797 6303 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0216 20:56:58.816807 6303 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:58.816824 6303 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:56:58.816836 6303 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:58.816843 6303 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:58.816846 6303 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:58.816854 6303 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:56:58.816860 6303 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:58.816866 6303 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:58.816883 6303 factory.go:656] Stopping watch factory\\\\nI0216 20:56:58.816906 6303 ovnkube.go:599] Stopped ovnkube\\\\nI0216 20:56:58.816914 6303 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:58.816925 6303 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.286419 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.331663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.331720 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.331763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.331785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.331801 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.332482 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.353397 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.365944 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.379897 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:18Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.434674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.435040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.435196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.435325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.435445 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.537862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.537979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.538006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.538037 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.538060 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.572825 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:12:22.738180731 +0000 UTC Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.597473 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:18 crc kubenswrapper[4805]: E0216 20:57:18.597647 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.640653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.640772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.640792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.640813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.640826 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.744454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.744497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.744505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.744524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.744539 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.847972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.848075 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.848093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.848118 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.848137 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.951554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.951610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.951620 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.951636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:18 crc kubenswrapper[4805]: I0216 20:57:18.951646 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:18Z","lastTransitionTime":"2026-02-16T20:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.054350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.054433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.054455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.054482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.054502 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.156926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.156985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.157004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.157449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.157697 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.260655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.260768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.260796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.260822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.260842 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.364147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.364216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.364235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.364260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.364273 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.467267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.467318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.467335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.467356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.467374 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.570104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.570147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.570156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.570171 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.570184 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.573069 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:20:30.294621097 +0000 UTC Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.597062 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.597126 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.597178 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:19 crc kubenswrapper[4805]: E0216 20:57:19.597253 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:19 crc kubenswrapper[4805]: E0216 20:57:19.597378 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:19 crc kubenswrapper[4805]: E0216 20:57:19.597476 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.598985 4805 scope.go:117] "RemoveContainer" containerID="6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.672547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.672620 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.672643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.672672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.672696 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.776178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.776237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.776251 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.776272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.776287 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.878425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.878462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.878470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.878484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.878495 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.980454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.980502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.980513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.980530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:19 crc kubenswrapper[4805]: I0216 20:57:19.980540 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:19Z","lastTransitionTime":"2026-02-16T20:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.083133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.083537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.083546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.083560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.083570 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.185643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.185701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.185717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.185767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.185789 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.249359 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/1.log" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.252319 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c"} Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.252867 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.267657 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 
20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.276756 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.288220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.288254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.288263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.288277 4805 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.288287 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.289665 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89
e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a
1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.298292 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc 
kubenswrapper[4805]: I0216 20:57:20.309068 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.323532 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.335678 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.355087 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 20:56:58.816771 6303 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:58.816791 6303 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:58.816792 6303 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:58.816797 6303 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0216 20:56:58.816807 6303 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:58.816824 6303 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:56:58.816836 6303 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:58.816843 6303 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:58.816846 6303 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:58.816854 6303 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:56:58.816860 6303 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:58.816866 6303 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:58.816883 6303 factory.go:656] Stopping watch factory\\\\nI0216 20:56:58.816906 6303 ovnkube.go:599] Stopped ovnkube\\\\nI0216 20:56:58.816914 6303 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:58.816925 6303 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/
\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.368697 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.380814 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.390806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.390855 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.390870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.390894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.390914 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.394890 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.405454 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.417545 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad
10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.429406 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.443684 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.456781 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.468022 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.494052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.494090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.494100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.494116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.494125 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.573743 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:45:00.411851152 +0000 UTC Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.596770 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:20 crc kubenswrapper[4805]: E0216 20:57:20.596954 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.598027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.598069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.598079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.598096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.598110 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.701653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.701760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.701788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.701820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.701844 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.804836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.804917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.804935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.804958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.804976 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.868627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.868677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.868693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.868718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.868771 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: E0216 20:57:20.886555 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.891289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.891367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.891386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.891412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.891430 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: E0216 20:57:20.909319 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.915147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.915212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.915227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.915252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.915270 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: E0216 20:57:20.931488 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.936101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.936150 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.936162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.936180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.936192 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: E0216 20:57:20.951293 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.954924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.954970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.954984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.955002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.955015 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:20 crc kubenswrapper[4805]: E0216 20:57:20.968849 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:20 crc kubenswrapper[4805]: E0216 20:57:20.969014 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.970705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.970778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.970797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.970813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:20 crc kubenswrapper[4805]: I0216 20:57:20.970827 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:20Z","lastTransitionTime":"2026-02-16T20:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.073509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.073552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.073563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.073580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.073626 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.177194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.177262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.177277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.177301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.177314 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.260769 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/2.log" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.261509 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/1.log" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.266617 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c" exitCode=1 Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.266671 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c"} Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.266712 4805 scope.go:117] "RemoveContainer" containerID="6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.268521 4805 scope.go:117] "RemoveContainer" containerID="2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c" Feb 16 20:57:21 crc kubenswrapper[4805]: E0216 20:57:21.268892 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.280441 4805 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.280486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.280497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.280548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.280565 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.294127 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cf72816b278de023d6cfca8ba4742782125e039828abda07322e38a03b81efc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 20:56:58.816771 6303 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 20:56:58.816791 6303 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 20:56:58.816792 6303 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 20:56:58.816797 6303 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0216 20:56:58.816807 6303 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 20:56:58.816824 6303 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 20:56:58.816836 6303 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 20:56:58.816843 6303 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 20:56:58.816846 6303 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 20:56:58.816854 6303 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 20:56:58.816860 6303 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 20:56:58.816866 6303 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 20:56:58.816883 6303 factory.go:656] Stopping watch factory\\\\nI0216 20:56:58.816906 6303 ovnkube.go:599] Stopped ovnkube\\\\nI0216 20:56:58.816914 6303 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 20:56:58.816925 6303 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"ed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z]\\\\nI0216 20:57:20.549106 6580 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Router\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":
\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc 
kubenswrapper[4805]: I0216 20:57:21.308581 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.322139 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.334163 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.344825 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.356208 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.369806 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.383378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.383440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.383455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.383471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.383484 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.386813 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.402634 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.414262 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.427702 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.440491 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.452225 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.468272 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5
e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.480354 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.489577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.489623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.489633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.489649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.489659 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.504365 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.514876 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:21Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:21 crc 
kubenswrapper[4805]: I0216 20:57:21.574483 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:40:28.783349545 +0000 UTC Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.593002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.593064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.593088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.593117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.593135 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.597442 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:21 crc kubenswrapper[4805]: E0216 20:57:21.597569 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.597444 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.597443 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:21 crc kubenswrapper[4805]: E0216 20:57:21.597699 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:21 crc kubenswrapper[4805]: E0216 20:57:21.597888 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.697130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.697204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.697221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.697246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.697263 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.801152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.801302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.801383 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.801413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.801435 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.905129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.905169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.905179 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.905194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:21 crc kubenswrapper[4805]: I0216 20:57:21.905204 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:21Z","lastTransitionTime":"2026-02-16T20:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.007694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.007771 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.007783 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.007796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.007806 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.110691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.110763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.110772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.110789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.110800 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.213611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.213679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.213697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.213754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.213780 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.274030 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/2.log" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.281495 4805 scope.go:117] "RemoveContainer" containerID="2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c" Feb 16 20:57:22 crc kubenswrapper[4805]: E0216 20:57:22.281835 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.297998 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.313631 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.316978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.317056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.317075 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.317103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.317121 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.328058 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.348846 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.365762 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5
e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.382306 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.399882 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c83292
4eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.414206 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.419678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.419772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.419782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.419797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.419808 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.428427 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34
dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.447309 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.461457 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.488221 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"ed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z]\\\\nI0216 20:57:20.549106 6580 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Router\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.502294 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.516881 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.522123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.522196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.522212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.522234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.522252 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.531652 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.542940 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.556586 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad
10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:22Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.574935 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:09:13.554641132 +0000 UTC Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.597441 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:22 crc kubenswrapper[4805]: E0216 20:57:22.597647 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.625117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.625220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.625240 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.625265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.625282 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.728697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.728775 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.728790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.728814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.728831 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.831911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.831958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.832292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.832323 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.832337 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.936843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.937280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.937467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.937689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:22 crc kubenswrapper[4805]: I0216 20:57:22.937948 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:22Z","lastTransitionTime":"2026-02-16T20:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.041600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.041702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.041748 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.041773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.041792 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.145072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.145119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.145129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.145144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.145153 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.247541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.247599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.247611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.247629 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.247641 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.351876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.351968 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.351987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.352013 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.352031 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.456068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.456123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.456139 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.456164 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.456182 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.559436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.559502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.559524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.559552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.559693 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.576007 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:52:44.900307277 +0000 UTC Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.597537 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.597680 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:23 crc kubenswrapper[4805]: E0216 20:57:23.597706 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.597550 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:23 crc kubenswrapper[4805]: E0216 20:57:23.600088 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:23 crc kubenswrapper[4805]: E0216 20:57:23.600230 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.618808 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.637919 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.655228 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.663381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.663514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.663541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.663580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.663603 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.674711 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.700756 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5
e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.720058 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.747821 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c83292
4eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.760537 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.765621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.765672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.765687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.765708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.765751 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.774354 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34
dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.786473 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.804675 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.824405 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"ed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z]\\\\nI0216 20:57:20.549106 6580 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Router\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.839158 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad
10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.853032 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.865758 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.868987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.869071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.869088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.869150 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.869166 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.884643 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.894923 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:23Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.972455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.972495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.972504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.972520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:23 crc kubenswrapper[4805]: I0216 20:57:23.972532 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:23Z","lastTransitionTime":"2026-02-16T20:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.076300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.076404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.076423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.076456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.076480 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.179716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.179825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.179882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.179913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.179932 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.282880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.282962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.282984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.283020 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.283040 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.387021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.387097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.387119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.387147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.387168 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.490338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.490399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.490422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.490449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.490471 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.576614 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:24:44.597073493 +0000 UTC Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.593572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.593663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.593817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.593851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.593868 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.597628 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:24 crc kubenswrapper[4805]: E0216 20:57:24.597901 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.696666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.696758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.696777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.696799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.696816 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.799665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.800279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.800324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.800345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.800359 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.905309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.905371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.905381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.905400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:24 crc kubenswrapper[4805]: I0216 20:57:24.905411 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:24Z","lastTransitionTime":"2026-02-16T20:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.007936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.008003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.008013 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.008035 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.008049 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.111233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.111302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.111325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.111358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.111382 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.215401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.215472 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.215508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.215531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.215552 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.318947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.319012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.319036 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.319059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.319072 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.421883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.421948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.421962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.421981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.421996 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.524876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.524940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.524955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.524976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.524992 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.578483 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:33:28.390349135 +0000 UTC Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.596885 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.596953 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:25 crc kubenswrapper[4805]: E0216 20:57:25.597129 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:25 crc kubenswrapper[4805]: E0216 20:57:25.597258 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.597674 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:25 crc kubenswrapper[4805]: E0216 20:57:25.597872 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.628087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.628166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.628189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.628221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.628245 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.731342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.731399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.731416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.731441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.731460 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.834525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.834572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.834582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.834600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.834612 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.937694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.937820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.937831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.937849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:25 crc kubenswrapper[4805]: I0216 20:57:25.937860 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:25Z","lastTransitionTime":"2026-02-16T20:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.040343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.040417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.040445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.040476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.040799 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.143792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.143840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.143857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.143879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.143895 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.246907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.246965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.246976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.246995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.247008 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.349255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.349322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.349333 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.349352 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.349363 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.452236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.452286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.452295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.452311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.452327 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.555322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.555403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.555428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.555506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.555535 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.578905 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 16:18:41.710039494 +0000 UTC Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.597645 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:26 crc kubenswrapper[4805]: E0216 20:57:26.597842 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.658693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.658765 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.658777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.658794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.658806 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.761577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.761633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.761648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.761667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.761682 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.864316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.864373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.864385 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.864404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.864417 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.966551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.966631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.966661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.966691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:26 crc kubenswrapper[4805]: I0216 20:57:26.966714 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:26Z","lastTransitionTime":"2026-02-16T20:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.069235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.069280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.069291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.069309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.069321 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.172631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.172711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.172830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.172860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.172883 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.276165 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.276226 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.276247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.276275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.276296 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.379689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.379787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.379805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.379830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.379847 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.482459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.482814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.482824 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.482839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.482849 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.579773 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:50:31.842730756 +0000 UTC Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.585884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.585971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.585998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.586024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.586041 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.597500 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.597547 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.597620 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:27 crc kubenswrapper[4805]: E0216 20:57:27.597795 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:27 crc kubenswrapper[4805]: E0216 20:57:27.597910 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:27 crc kubenswrapper[4805]: E0216 20:57:27.597976 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.689788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.689836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.689852 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.689870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.689883 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.793872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.793989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.794028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.794217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.794334 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.897666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.897708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.897742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.897761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:27 crc kubenswrapper[4805]: I0216 20:57:27.897773 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:27Z","lastTransitionTime":"2026-02-16T20:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.000531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.000606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.000626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.000660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.000679 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.106912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.106971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.106983 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.107003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.107016 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.209705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.209752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.209760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.209775 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.209785 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.313207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.313248 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.313259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.313276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.313288 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.416053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.416089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.416100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.416116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.416128 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.518945 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.519000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.519012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.519031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.519043 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.580419 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 21:14:22.337388474 +0000 UTC Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.596795 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:28 crc kubenswrapper[4805]: E0216 20:57:28.596999 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.620778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.620821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.620832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.620849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.620861 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.722943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.722978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.722985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.722998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.723008 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.825692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.825751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.825763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.825779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.825790 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.928531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.928574 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.928587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.928603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:28 crc kubenswrapper[4805]: I0216 20:57:28.928615 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:28Z","lastTransitionTime":"2026-02-16T20:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.030756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.030813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.030828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.030849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.030864 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.133379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.133428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.133440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.133456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.133468 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.235950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.235999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.236012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.236030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.236040 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.337985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.338033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.338046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.338063 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.338075 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.439980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.440019 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.440027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.440041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.440050 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.542360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.542446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.542460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.542476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.542488 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.581136 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:01:51.326134401 +0000 UTC Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.597609 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.597644 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.597755 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:29 crc kubenswrapper[4805]: E0216 20:57:29.597749 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:29 crc kubenswrapper[4805]: E0216 20:57:29.597797 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:29 crc kubenswrapper[4805]: E0216 20:57:29.597853 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.645180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.645223 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.645235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.645253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.645264 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.747875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.747944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.747957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.747976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.747987 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.850797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.850867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.850885 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.850909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.850927 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.953810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.953873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.953883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.953900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:29 crc kubenswrapper[4805]: I0216 20:57:29.953911 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:29Z","lastTransitionTime":"2026-02-16T20:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.056321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.056364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.056373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.056390 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.056398 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.158682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.158748 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.158761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.158779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.158791 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.261028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.261061 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.261071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.261083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.261095 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.363335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.363389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.363400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.363420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.363433 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.465947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.466021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.466035 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.466054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.466091 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.569781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.569860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.569883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.569911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.569933 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.582338 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:19:49.058424011 +0000 UTC Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.597131 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:30 crc kubenswrapper[4805]: E0216 20:57:30.597308 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.672327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.672382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.672391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.672404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.672413 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.775184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.775284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.775311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.775338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.775357 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.878439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.878488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.878497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.878511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.878521 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.980857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.981242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.981450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.981660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:30 crc kubenswrapper[4805]: I0216 20:57:30.981794 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:30Z","lastTransitionTime":"2026-02-16T20:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.084683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.084766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.084778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.084795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.084806 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.187281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.187338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.187357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.187378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.187391 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.213083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.213134 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.213148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.213164 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.213175 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: E0216 20:57:31.226610 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:31Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.231367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.231436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.231451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.231510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.231534 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: E0216 20:57:31.247090 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:31Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.251811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.251879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.251888 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.251907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.251918 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: E0216 20:57:31.268097 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:31Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.271688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.271750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.271759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.271776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.271787 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: E0216 20:57:31.283638 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:31Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.287890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.287932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.287942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.287956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.287966 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: E0216 20:57:31.298622 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:31Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:31 crc kubenswrapper[4805]: E0216 20:57:31.298753 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.300621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.300674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.300685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.300704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.300717 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.345148 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:31 crc kubenswrapper[4805]: E0216 20:57:31.345345 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:31 crc kubenswrapper[4805]: E0216 20:57:31.345414 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs podName:68747e4a-6576-44c3-b663-250315f6712f nodeName:}" failed. No retries permitted until 2026-02-16 20:58:03.345397326 +0000 UTC m=+101.164080621 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs") pod "network-metrics-daemon-b6xdh" (UID: "68747e4a-6576-44c3-b663-250315f6712f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.404292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.404340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.404354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.404374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.404387 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.506801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.506848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.506858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.506873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.506884 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.583402 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:36:24.224810364 +0000 UTC Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.597023 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.597103 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.597144 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:31 crc kubenswrapper[4805]: E0216 20:57:31.597172 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:31 crc kubenswrapper[4805]: E0216 20:57:31.597269 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:31 crc kubenswrapper[4805]: E0216 20:57:31.597402 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.609219 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.609246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.609255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.609267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.609276 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.711524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.711584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.711606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.711628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.711642 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.814228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.814289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.814301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.814322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.814336 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.917692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.917775 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.917787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.917805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:31 crc kubenswrapper[4805]: I0216 20:57:31.917818 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:31Z","lastTransitionTime":"2026-02-16T20:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.020836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.020882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.020893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.020910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.020921 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.124265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.124345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.124359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.124380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.124393 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.227639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.227690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.227701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.227735 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.227749 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.330371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.330428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.330438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.330453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.330464 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.433515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.433568 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.433580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.433598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.433613 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.536560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.536622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.536635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.536655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.536669 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.584322 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:41:08.65098496 +0000 UTC Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.597703 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:32 crc kubenswrapper[4805]: E0216 20:57:32.597886 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.639133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.639181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.639193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.639211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.639227 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.742413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.742476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.742499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.742520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.742533 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.845250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.845290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.845298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.845313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.845322 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.948233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.948298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.948313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.948332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:32 crc kubenswrapper[4805]: I0216 20:57:32.948343 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:32Z","lastTransitionTime":"2026-02-16T20:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.050854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.050909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.050921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.050940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.050953 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.154808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.154840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.154848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.154860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.154870 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.257876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.257937 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.257949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.257970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.257984 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.360781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.360838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.360851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.360876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.360890 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.463971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.464033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.464045 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.464074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.464090 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.567308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.567376 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.567394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.567419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.567441 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.584857 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:48:05.828307184 +0000 UTC Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.597331 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:33 crc kubenswrapper[4805]: E0216 20:57:33.597506 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.597618 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.597656 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:33 crc kubenswrapper[4805]: E0216 20:57:33.597851 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:33 crc kubenswrapper[4805]: E0216 20:57:33.597945 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.614970 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.635163 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40
810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d
7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.651071 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc 
kubenswrapper[4805]: I0216 20:57:33.664093 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022
faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.670839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.670878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.670887 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.670929 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.670941 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.675418 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.686613 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.704112 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"ed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z]\\\\nI0216 20:57:20.549106 6580 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Router\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.716228 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.730645 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.741301 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.753240 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.763407 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.773254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.773305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.773318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.773334 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.773348 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.773455 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.788599 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.799651 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.810233 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.824872 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:33Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.876563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.876640 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.876658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.876683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.876700 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.979938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.979984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.979999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.980016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:33 crc kubenswrapper[4805]: I0216 20:57:33.980028 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:33Z","lastTransitionTime":"2026-02-16T20:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.083025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.083107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.083131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.083160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.083184 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.186783 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.186860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.186880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.186906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.186963 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.289741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.289787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.289799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.289818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.289830 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.392465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.392503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.392511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.392525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.392535 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.495560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.496070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.496190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.496317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.496420 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.585634 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:51:34.201588712 +0000 UTC Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.596944 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:34 crc kubenswrapper[4805]: E0216 20:57:34.597064 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.597758 4805 scope.go:117] "RemoveContainer" containerID="2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c" Feb 16 20:57:34 crc kubenswrapper[4805]: E0216 20:57:34.597967 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.598425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.598487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.598498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.598514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.598525 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.701630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.702497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.702688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.702877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.703786 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.806813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.806870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.806881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.806900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.806915 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.909437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.909565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.909602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.909642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:34 crc kubenswrapper[4805]: I0216 20:57:34.909669 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:34Z","lastTransitionTime":"2026-02-16T20:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.012520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.012943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.013026 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.013107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.013185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.116839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.116881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.116891 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.116907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.116933 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.219816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.219881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.219893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.219909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.219922 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.322898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.322955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.322966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.322989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.323002 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.330611 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qwfz_7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2/kube-multus/0.log" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.330712 4805 generic.go:334] "Generic (PLEG): container finished" podID="7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2" containerID="cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a" exitCode=1 Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.330786 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qwfz" event={"ID":"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2","Type":"ContainerDied","Data":"cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a"} Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.331333 4805 scope.go:117] "RemoveContainer" containerID="cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.344606 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.357902 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:49+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9\\\\n2026-02-16T20:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:50Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:50Z [verbose] Readiness Indicator file check\\\\n2026-02-16T20:57:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.368930 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.379210 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.391230 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.404751 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.418294 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.425988 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.426029 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.426038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.426054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.426064 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.432422 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.448256 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.464787 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a9
4578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.479104 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc 
kubenswrapper[4805]: I0216 20:57:35.494523 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022
faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.508058 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.521713 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.528956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.528999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.529011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.529027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.529037 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.534584 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.551087 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"ed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z]\\\\nI0216 20:57:20.549106 6580 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Router\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.560842 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:35Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.586160 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 15:21:06.671112282 +0000 UTC Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.597697 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.597822 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.597844 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:35 crc kubenswrapper[4805]: E0216 20:57:35.597970 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:35 crc kubenswrapper[4805]: E0216 20:57:35.597860 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:35 crc kubenswrapper[4805]: E0216 20:57:35.598087 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.632111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.632394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.632482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.632576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.632661 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.743413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.743499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.743518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.743547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.743568 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.845439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.845488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.845500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.845517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.845528 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.948357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.948397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.948405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.948418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:35 crc kubenswrapper[4805]: I0216 20:57:35.948430 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:35Z","lastTransitionTime":"2026-02-16T20:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.051688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.051785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.051804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.051826 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.051843 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.154851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.154924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.154936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.154952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.154982 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.257998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.258060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.258073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.258091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.258104 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.338333 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qwfz_7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2/kube-multus/0.log" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.338410 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qwfz" event={"ID":"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2","Type":"ContainerStarted","Data":"ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36"} Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.355869 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.360328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.360356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4805]: 
I0216 20:57:36.360365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.360381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.360392 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.370936 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68
504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.388827 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a9
4578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.404068 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc 
kubenswrapper[4805]: I0216 20:57:36.416996 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.431492 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.450362 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.470952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 
20:57:36.471031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.471058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.471089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.471111 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.473600 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"ed to start network controller: failed to start default network controller: unable to 
create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z]\\\\nI0216 20:57:20.549106 6580 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Router\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.485098 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.495861 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.511354 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9\\\\n2026-02-16T20:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:50Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:50Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T20:57:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.522561 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee90
5d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.531869 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad
10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.541981 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.551772 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.563285 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.573838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.573867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.573877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.573894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.573905 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.578607 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:36Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.586708 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:55:02.711527844 +0000 UTC Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.597195 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:36 crc kubenswrapper[4805]: E0216 20:57:36.597439 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.608258 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.676608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.676668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.676682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.676700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.676715 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.779569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.779606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.779614 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.779630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.779642 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.882362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.882420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.882429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.882445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.882456 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.985501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.985551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.985566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.985584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:36 crc kubenswrapper[4805]: I0216 20:57:36.985597 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:36Z","lastTransitionTime":"2026-02-16T20:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.089366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.089418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.089429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.089450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.089463 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.192932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.192987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.192997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.193014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.193025 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.295612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.295682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.295692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.295708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.295732 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.398572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.398625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.398637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.398657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.398670 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.501766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.501850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.501873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.501901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.501919 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.587085 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:25:38.800537553 +0000 UTC Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.597881 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.597998 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:37 crc kubenswrapper[4805]: E0216 20:57:37.598051 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.598084 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:37 crc kubenswrapper[4805]: E0216 20:57:37.598162 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:37 crc kubenswrapper[4805]: E0216 20:57:37.598293 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.604163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.604212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.604235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.604258 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.604277 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.707950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.708010 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.708022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.708040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.708053 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.810639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.810710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.810741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.810759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.810773 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.913625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.913702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.913716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.913755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:37 crc kubenswrapper[4805]: I0216 20:57:37.913767 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:37Z","lastTransitionTime":"2026-02-16T20:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.017364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.017415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.017425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.017443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.017453 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.120565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.120623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.120632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.120650 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.120659 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.222676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.222755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.222769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.222786 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.222799 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.324913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.324960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.324972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.324989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.325002 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.427706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.427873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.427889 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.427909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.427922 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.530381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.530462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.530480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.530510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.530531 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.587278 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:27:43.716863996 +0000 UTC Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.597600 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:38 crc kubenswrapper[4805]: E0216 20:57:38.597791 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.633538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.633598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.633616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.633643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.633666 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.736912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.736994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.737015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.737041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.737059 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.840265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.840310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.840322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.840338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.840382 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.943286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.943337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.943348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.943363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:38 crc kubenswrapper[4805]: I0216 20:57:38.943375 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:38Z","lastTransitionTime":"2026-02-16T20:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.046402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.046458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.046471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.046490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.046527 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.149267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.149315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.149327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.149344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.149356 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.252474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.252529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.252542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.252595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.252617 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.355579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.355646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.355663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.355687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.355700 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.458687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.458803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.458840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.458874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.458892 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.562471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.562548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.562565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.562586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.562603 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.588123 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:19:03.403606705 +0000 UTC Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.597927 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.598025 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.598055 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:39 crc kubenswrapper[4805]: E0216 20:57:39.598137 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:39 crc kubenswrapper[4805]: E0216 20:57:39.598327 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:39 crc kubenswrapper[4805]: E0216 20:57:39.598463 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.665916 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.665997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.666015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.666038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.666056 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.769278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.769359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.769378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.769404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.769418 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.872714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.872809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.872832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.872859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.872876 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.978452 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.978516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.978534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.978557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:39 crc kubenswrapper[4805]: I0216 20:57:39.978576 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:39Z","lastTransitionTime":"2026-02-16T20:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.081564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.081626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.081645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.081668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.081686 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.185143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.185197 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.185211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.185231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.185244 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.288248 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.288301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.288318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.288342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.288364 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.393272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.393338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.393399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.393426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.393444 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.496779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.496838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.496856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.496880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.496901 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.588792 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:53:04.038758078 +0000 UTC Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.597224 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:40 crc kubenswrapper[4805]: E0216 20:57:40.597399 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.599880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.599919 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.599931 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.599947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.599959 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.703254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.703341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.703365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.703394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.703414 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.806568 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.806623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.806635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.806654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.806666 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.909765 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.909921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.909944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.909969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:40 crc kubenswrapper[4805]: I0216 20:57:40.910029 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:40Z","lastTransitionTime":"2026-02-16T20:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.013703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.013773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.013782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.013799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.013811 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.117412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.117478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.117494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.117519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.117536 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.220815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.220898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.220922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.220957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.220981 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.303269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.303350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.303371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.303403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.303429 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: E0216 20:57:41.324516 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.330210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.330279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.330303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.330331 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.330356 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: E0216 20:57:41.351558 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z"
Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.357752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.357806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.357825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.357848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.357866 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:41 crc kubenswrapper[4805]: E0216 20:57:41.375473 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z"
Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.380883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.380963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.380987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.381018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.381044 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:41 crc kubenswrapper[4805]: E0216 20:57:41.401393 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.406467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.406529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.406547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.406584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.406601 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: E0216 20:57:41.425538 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:41Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:41 crc kubenswrapper[4805]: E0216 20:57:41.425646 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.427178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.427217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.427229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.427245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.427258 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.529882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.529933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.529944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.529962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.529976 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.589597 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:46:04.924812081 +0000 UTC Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.597164 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.597176 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.597370 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:41 crc kubenswrapper[4805]: E0216 20:57:41.597616 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:41 crc kubenswrapper[4805]: E0216 20:57:41.597850 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:41 crc kubenswrapper[4805]: E0216 20:57:41.597937 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.633191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.633308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.633331 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.633363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.633388 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.736777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.736872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.736896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.736927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.736950 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.840772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.840838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.840864 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.840895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.840918 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.944120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.944187 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.944204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.944231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:41 crc kubenswrapper[4805]: I0216 20:57:41.944255 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:41Z","lastTransitionTime":"2026-02-16T20:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.048217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.048312 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.048335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.048366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.048391 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.153435 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.153519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.153543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.153573 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.153599 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.257962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.258354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.258449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.258532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.258613 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.360831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.360872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.360886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.360908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.360920 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.464300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.464342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.464351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.464366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.464376 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.567963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.568037 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.568054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.568077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.568093 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.591270 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:12:28.85188175 +0000 UTC
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.597666 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh"
Feb 16 20:57:42 crc kubenswrapper[4805]: E0216 20:57:42.598115 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.671235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.671287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.671299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.671314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.671326 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.774216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.774662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.774843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.774951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.775051 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.878437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.878503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.878521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.878545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.878562 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.982149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.982508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.982607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.982691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:42 crc kubenswrapper[4805]: I0216 20:57:42.982802 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:42Z","lastTransitionTime":"2026-02-16T20:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.086789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.086863 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.086903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.086941 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.086965 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.190293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.190360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.190383 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.190413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.190435 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.293841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.293911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.293933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.293965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.293988 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.397240 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.397281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.397292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.397311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.397326 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.500488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.500546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.500562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.500582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.500596 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.591983 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:19:19.097200043 +0000 UTC
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.597507 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.597541 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 20:57:43 crc kubenswrapper[4805]: E0216 20:57:43.597707 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.597837 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 20:57:43 crc kubenswrapper[4805]: E0216 20:57:43.598012 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 20:57:43 crc kubenswrapper[4805]: E0216 20:57:43.598241 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.604384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.604491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.604511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.604578 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.604598 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.620630 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.635520 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.660748 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"ed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z]\\\\nI0216 20:57:20.549106 6580 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Router\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.679602 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.695119 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43225cd3-b4a5-4fd7-903e-cbeda10fb884\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046299ba36811947fdf82ca53f38854b76e08fd42cd0c4687988c59a2a286f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.707940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.707979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.707994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.708009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.708021 4805 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.720156 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.743477 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9\\\\n2026-02-16T20:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:50Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:50Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T20:57:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.752704 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee90
5d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.763473 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad
10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.776063 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.793341 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.811478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.811522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.811533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.811551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.811564 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.811622 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.828280 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.846207 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.865290 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a9
4578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.882823 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc 
kubenswrapper[4805]: I0216 20:57:43.898676 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022
faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.912512 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:43Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.913457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.913488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.913497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.913511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:43 crc kubenswrapper[4805]: I0216 20:57:43.913522 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:43Z","lastTransitionTime":"2026-02-16T20:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.017213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.017274 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.017290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.017314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.017331 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.120187 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.120263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.120287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.120318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.120343 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.223288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.223387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.223399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.223417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.223428 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.327090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.327202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.327228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.327262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.327288 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.431360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.431440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.431464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.431496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.431519 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.535370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.535441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.535465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.535495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.535520 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.593119 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:53:09.399896219 +0000 UTC Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.597638 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:44 crc kubenswrapper[4805]: E0216 20:57:44.597787 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.640305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.640376 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.640404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.640434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.640460 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.742800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.742873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.742888 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.742908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.742922 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.846090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.846132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.846140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.846156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.846167 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.949354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.949447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.949478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.949511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:44 crc kubenswrapper[4805]: I0216 20:57:44.949535 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:44Z","lastTransitionTime":"2026-02-16T20:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.053160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.053236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.053254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.053282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.053304 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.156918 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.156980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.156998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.157023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.157045 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.260005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.260081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.260103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.260131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.260157 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.398169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.398243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.398261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.398287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.398309 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.501325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.501892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.501914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.501937 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.501958 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.593836 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:35:52.940129075 +0000 UTC Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.597329 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:45 crc kubenswrapper[4805]: E0216 20:57:45.597563 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.597866 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:45 crc kubenswrapper[4805]: E0216 20:57:45.597939 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.598066 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:45 crc kubenswrapper[4805]: E0216 20:57:45.598305 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.605140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.605174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.605184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.605200 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.605214 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.708513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.708569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.708585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.708608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.708625 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.811284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.811339 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.811357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.811377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.811395 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.914432 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.914525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.914543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.914567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:45 crc kubenswrapper[4805]: I0216 20:57:45.914583 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:45Z","lastTransitionTime":"2026-02-16T20:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.017083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.017145 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.017156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.017173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.017185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.120181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.120272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.120297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.120327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.120362 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.223765 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.223848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.223868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.223893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.223912 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.327185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.327261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.327284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.327313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.327335 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.442716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.442777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.442788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.442805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.442818 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.546806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.546868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.546885 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.546909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.546926 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.595016 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 01:59:06.728409243 +0000 UTC Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.597477 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:46 crc kubenswrapper[4805]: E0216 20:57:46.597898 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.598155 4805 scope.go:117] "RemoveContainer" containerID="2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.651393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.651447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.651456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.651473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.651486 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.754235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.754300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.754314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.754337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.754353 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.857239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.857285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.857296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.857312 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.857359 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.959567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.959618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.959633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.959655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:46 crc kubenswrapper[4805]: I0216 20:57:46.959671 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:46Z","lastTransitionTime":"2026-02-16T20:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.063205 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.063254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.063268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.063286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.063301 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.166041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.166086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.166095 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.166111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.166122 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.268696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.268762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.268774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.268791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.268802 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.371520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.371567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.371579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.371594 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.371604 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.431490 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/2.log" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.433617 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229"} Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.434080 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.449807 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.463569 4805 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.474618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.474670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.474685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.474705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.474742 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.476789 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.489123 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.504000 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5
e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.515234 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.532021 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c83292
4eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.543499 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.578121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.578159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.578167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.578183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.578192 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.607533 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 16:28:22.515458472 +0000 UTC Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.607840 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.607918 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.607993 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.608064 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.608332 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.608431 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.617624 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"ed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z]\\\\nI0216 20:57:20.549106 6580 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Router\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.633101 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.638881 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.639073 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.63904042 +0000 UTC m=+149.457723735 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.643818 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43225cd3-b4a5-4fd7-903e-cbeda10fb884\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046299ba36811947fdf82ca53f38854b76e08fd42cd0c4687988c59a2a286f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.656262 4805 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.670965 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.680708 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.681270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.681308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.681321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.681343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.681360 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.692679 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.704924 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.716382 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.728312 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9\\\\n2026-02-16T20:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:50Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:50Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T20:57:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:47Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.739800 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.739867 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.739900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.739923 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740010 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740061 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740083 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740090 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.740066355 +0000 UTC m=+149.558749670 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740097 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740157 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.740137767 +0000 UTC m=+149.558821102 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740083 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740198 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.740190328 +0000 UTC m=+149.558873623 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740614 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740652 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740672 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:47 crc kubenswrapper[4805]: E0216 20:57:47.740938 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.740921458 +0000 UTC m=+149.559604803 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.783782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.783819 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.783828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.783840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.783850 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.887612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.887698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.888087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.888154 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.888451 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.991775 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.991842 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.991859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.991886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:47 crc kubenswrapper[4805]: I0216 20:57:47.991904 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:47Z","lastTransitionTime":"2026-02-16T20:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.095645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.095712 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.095770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.095802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.095820 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.199137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.199196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.199212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.199231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.199245 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.302415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.302486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.302500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.302525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.302547 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.406110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.406175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.406191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.406215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.406233 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.440499 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/3.log" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.441824 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/2.log" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.446517 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229" exitCode=1 Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.446577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229"} Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.446626 4805 scope.go:117] "RemoveContainer" containerID="2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.448207 4805 scope.go:117] "RemoveContainer" containerID="344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229" Feb 16 20:57:48 crc kubenswrapper[4805]: E0216 20:57:48.448601 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.474334 4805 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.493471 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.510082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.510153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.510176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.510201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.510219 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.513619 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9\\\\n2026-02-16T20:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:50Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:50Z [verbose] Readiness Indicator file check\\\\n2026-02-16T20:57:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.529588 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.546394 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.565999 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:5
6:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d
5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.586656 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.597085 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:48 crc kubenswrapper[4805]: E0216 20:57:48.597281 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.608425 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:46:59.867069417 +0000 UTC Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.609339 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.612699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.612767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.612779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.612798 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.612811 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.625975 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.644877 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5
e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.659764 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.676657 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c83292
4eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.692321 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.707518 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a7
7abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.716153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.716199 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.716211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.716227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc 
kubenswrapper[4805]: I0216 20:57:48.716238 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.722538 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43225cd3-b4a5-4fd7-903e-cbeda10fb884\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046299ba36811947fdf82ca53f38854b76e08fd42cd0c4687988c59a2a286f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.738077 4805 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.751021 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 
20:57:48.781126 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2923870b4e9ecc4a0c06fe33cd05e23140996053431b6bacb9f4b5eefd1f9a2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:20Z\\\",\\\"message\\\":\\\"ed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:20Z is after 2025-08-24T17:21:41Z]\\\\nI0216 20:57:20.549106 6580 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Router\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:47Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:57:47.621941 6997 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:57:47.621949 6997 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0216 20:57:47.621955 6997 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0216 20:57:47.621960 6997 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0216 20:57:47.621941 6997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"
name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:48Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 
20:57:48.819898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.819958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.819981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.820013 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.820037 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.922514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.922904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.922913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.922926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:48 crc kubenswrapper[4805]: I0216 20:57:48.922936 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:48Z","lastTransitionTime":"2026-02-16T20:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.026338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.026400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.026417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.026444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.026462 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.129392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.129449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.129464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.129482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.129494 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.232299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.232374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.232394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.232424 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.232443 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.335786 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.335837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.335860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.335884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.335902 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.439458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.439528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.439547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.439574 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.439595 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.452296 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/3.log" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.457125 4805 scope.go:117] "RemoveContainer" containerID="344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229" Feb 16 20:57:49 crc kubenswrapper[4805]: E0216 20:57:49.457508 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.480171 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5
e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.492759 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.507594 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c83292
4eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.519507 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.539958 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:47Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:57:47.621941 6997 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:57:47.621949 6997 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in 
node crc\\\\nI0216 20:57:47.621955 6997 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0216 20:57:47.621960 6997 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0216 20:57:47.621941 6997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.542914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.542957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.542967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.542987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.543001 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.553994 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.566344 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43225cd3-b4a5-4fd7-903e-cbeda10fb884\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046299ba36811947fdf82ca53f38854b76e08fd42cd0c4687988c59a2a286f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.583099 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.598162 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.598264 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:49 crc kubenswrapper[4805]: E0216 20:57:49.598350 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:49 crc kubenswrapper[4805]: E0216 20:57:49.598470 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.598172 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:49 crc kubenswrapper[4805]: E0216 20:57:49.598700 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.599871 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c
08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.608635 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:31:58.882072369 +0000 UTC Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.615306 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d1
88c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.626868 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad
10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.642842 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.646025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.646059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.646071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.646086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.646097 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.659805 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.674520 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9\\\\n2026-02-16T20:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:50Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:50Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T20:57:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.688070 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.705502 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.722235 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.737823 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:49Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.749400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.749437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.749452 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.749472 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.749487 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.852898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.852967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.853021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.853049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.853071 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.956255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.956364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.956390 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.956417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:49 crc kubenswrapper[4805]: I0216 20:57:49.956443 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:49Z","lastTransitionTime":"2026-02-16T20:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.060211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.060286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.060328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.060365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.060403 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.164096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.164193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.164215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.164243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.164265 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.266960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.266996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.267008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.267031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.267043 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.370772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.370840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.370853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.370869 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.370882 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.474103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.474148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.474160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.474179 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.474189 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.577401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.577459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.577475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.577504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.577521 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.597737 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:50 crc kubenswrapper[4805]: E0216 20:57:50.597885 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.608918 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:58:00.002131146 +0000 UTC Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.680925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.681062 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.681087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.681115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.681135 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.784066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.784116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.784128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.784151 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.784203 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.888397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.888474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.888491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.888516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.888534 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.991501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.991574 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.991601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.991633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:50 crc kubenswrapper[4805]: I0216 20:57:50.991658 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:50Z","lastTransitionTime":"2026-02-16T20:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.099276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.099311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.099320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.099334 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.099380 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.203069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.203200 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.203212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.203229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.203238 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.307655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.307707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.307764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.307787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.307801 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.411457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.411507 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.411518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.411537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.411549 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.514436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.514492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.514505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.514527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.514541 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.597566 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.597633 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:51 crc kubenswrapper[4805]: E0216 20:57:51.597818 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.597863 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:51 crc kubenswrapper[4805]: E0216 20:57:51.598001 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:51 crc kubenswrapper[4805]: E0216 20:57:51.598258 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.610029 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:15:11.481897002 +0000 UTC Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.613385 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.613465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.613485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.613507 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.613525 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: E0216 20:57:51.633856 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.638337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.638374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.638382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.638397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.638407 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: E0216 20:57:51.657033 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.662358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.662417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.662433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.662455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.662473 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: E0216 20:57:51.684653 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.690245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.690286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.690301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.690319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.690334 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: E0216 20:57:51.709546 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.714761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.714795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.714808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.714824 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.714837 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: E0216 20:57:51.731084 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:51Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:51 crc kubenswrapper[4805]: E0216 20:57:51.731307 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.733876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.733939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.733951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.733971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.733983 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.837590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.837652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.837675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.837707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.837773 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.940539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.940588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.940602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.940617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:51 crc kubenswrapper[4805]: I0216 20:57:51.940630 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:51Z","lastTransitionTime":"2026-02-16T20:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.044035 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.044177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.044198 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.044224 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.044242 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.146626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.146679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.146691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.146706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.146763 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.249683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.249749 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.249757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.249772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.249782 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.353512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.353598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.353627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.353660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.353691 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.456710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.456786 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.456799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.456816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.456828 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.560141 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.560194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.560213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.560233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.560250 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.597403 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:52 crc kubenswrapper[4805]: E0216 20:57:52.597592 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.610594 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:42:24.503891237 +0000 UTC Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.664710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.664825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.664844 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.664871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.664899 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.768354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.769161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.769192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.769218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.769235 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.871807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.872257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.872419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.872563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.872592 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.975517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.975572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.975584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.975602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:52 crc kubenswrapper[4805]: I0216 20:57:52.975615 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:52Z","lastTransitionTime":"2026-02-16T20:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.079053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.079134 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.079156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.079181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.079199 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.182969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.183040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.183059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.183086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.183107 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.286709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.286839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.286854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.286875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.286887 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.389339 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.389400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.389414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.389435 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.389450 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.492529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.492587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.492602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.492627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.492640 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.596379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.596439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.596452 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.596471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.596483 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.596767 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:53 crc kubenswrapper[4805]: E0216 20:57:53.597151 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.597218 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.597181 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:53 crc kubenswrapper[4805]: E0216 20:57:53.597379 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:53 crc kubenswrapper[4805]: E0216 20:57:53.597526 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.611076 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:16:47.280731109 +0000 UTC Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.615556 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f319
7dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.634949 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.654709 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.670889 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.691279 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5
e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.699996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.700040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.700056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.700080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.700097 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.707513 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.737835 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.751668 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc 
kubenswrapper[4805]: I0216 20:57:53.782009 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:47Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:57:47.621941 6997 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:57:47.621949 6997 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in 
node crc\\\\nI0216 20:57:47.621955 6997 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0216 20:57:47.621960 6997 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0216 20:57:47.621941 6997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.794953 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.803896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.803943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.803958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.803978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.803992 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.806068 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43225cd3-b4a5-4fd7-903e-cbeda10fb884\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046299ba36811947fdf82ca53f38854b76e08fd42cd0c4687988c59a2a286f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.819937 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.831609 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.844464 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.861416 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.879173 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.892385 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.907579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.907645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.907657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 
20:57:53.907682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.907694 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:53Z","lastTransitionTime":"2026-02-16T20:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:53 crc kubenswrapper[4805]: I0216 20:57:53.907851 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9\\\\n2026-02-16T20:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:50Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:50Z [verbose] Readiness Indicator file check\\\\n2026-02-16T20:57:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:57:53Z is after 2025-08-24T17:21:41Z" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.011290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.011386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.011413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.011469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.011495 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.114984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.115050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.115069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.115096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.115116 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.218617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.218665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.218679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.218700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.218715 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.321366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.321420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.321456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.321487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.321507 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.424091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.424157 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.424177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.424204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.424223 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.528204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.528277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.528295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.528321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.528339 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.597218 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:54 crc kubenswrapper[4805]: E0216 20:57:54.597484 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.611680 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:35:53.970559796 +0000 UTC Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.631481 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.631542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.631567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.631624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.631660 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.736044 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.736138 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.736162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.736189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.736210 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.839543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.839615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.839632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.839653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.839671 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.942304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.942363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.942382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.942403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:54 crc kubenswrapper[4805]: I0216 20:57:54.942421 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:54Z","lastTransitionTime":"2026-02-16T20:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.045397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.045446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.045462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.045483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.045500 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.148682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.148813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.148853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.148882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.148905 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.251616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.251677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.251808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.251855 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.251872 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.355566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.355642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.355660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.355684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.355702 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.459570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.459656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.459679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.459711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.459773 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.563349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.563411 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.563427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.563450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.563471 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.597068 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:55 crc kubenswrapper[4805]: E0216 20:57:55.597255 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.597097 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.597334 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:55 crc kubenswrapper[4805]: E0216 20:57:55.597504 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:55 crc kubenswrapper[4805]: E0216 20:57:55.597619 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.612148 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 06:17:07.758255797 +0000 UTC Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.666817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.666872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.666890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.666916 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.666934 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.770709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.770791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.770808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.770833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.770850 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.874528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.874595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.874612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.874636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.874654 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.978328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.978404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.978427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.978453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:55 crc kubenswrapper[4805]: I0216 20:57:55.978474 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:55Z","lastTransitionTime":"2026-02-16T20:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.086144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.086243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.086268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.086300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.086335 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.189514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.189591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.189612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.189641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.189663 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.293819 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.293936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.293958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.293982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.293999 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.398619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.398686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.398717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.398779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.398803 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.501448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.501514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.501526 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.501544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.501924 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.597497 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:56 crc kubenswrapper[4805]: E0216 20:57:56.597672 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.607831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.607886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.607899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.607916 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.607931 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.613131 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:05:50.442493076 +0000 UTC Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.711393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.711450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.711468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.711491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.711507 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.814416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.814460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.814470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.814486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.814498 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.917400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.917460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.917480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.917502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:56 crc kubenswrapper[4805]: I0216 20:57:56.917518 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:56Z","lastTransitionTime":"2026-02-16T20:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.020823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.020871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.020882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.020899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.020911 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.123769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.123862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.123885 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.123906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.123920 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.227276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.227332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.227352 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.227372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.227383 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.329591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.329831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.329849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.329868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.329880 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.433432 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.433492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.433508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.433535 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.433552 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.536988 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.537061 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.537083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.537111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.537133 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.597433 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:57:57 crc kubenswrapper[4805]: E0216 20:57:57.598229 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.597528 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:57:57 crc kubenswrapper[4805]: E0216 20:57:57.598811 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.597470 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:57:57 crc kubenswrapper[4805]: E0216 20:57:57.599399 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.613335 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:45:31.170551071 +0000 UTC Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.640330 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.640881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.641062 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.641275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.641425 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.744589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.744650 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.744659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.744673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.744685 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.847917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.847964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.847974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.847992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.848003 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.951123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.951145 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.951153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.951168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:57 crc kubenswrapper[4805]: I0216 20:57:57.951177 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:57Z","lastTransitionTime":"2026-02-16T20:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.053672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.053802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.053848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.053867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.053876 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.156956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.156998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.157009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.157025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.157038 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.260532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.260628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.260650 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.260680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.260701 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.364834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.364963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.364982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.365010 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.365035 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.467770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.467833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.467849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.467873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.467890 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.570274 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.570329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.570346 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.570369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.570385 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.597553 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:57:58 crc kubenswrapper[4805]: E0216 20:57:58.597834 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.613795 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:05:20.417317336 +0000 UTC Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.672812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.672882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.672898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.672921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.672939 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.775888 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.775963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.775991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.776017 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.776044 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.878475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.878510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.878519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.878532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.878540 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.981369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.981427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.981438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.981457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:58 crc kubenswrapper[4805]: I0216 20:57:58.981469 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:58Z","lastTransitionTime":"2026-02-16T20:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.084391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.084476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.084496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.084524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.084541 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.186821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.186867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.186876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.186893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.186905 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.289642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.289768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.289800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.289830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.289853 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.393308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.393410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.393433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.393467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.393489 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.495733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.495788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.495800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.495818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.495831 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.596955 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.597054 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 20:57:59 crc kubenswrapper[4805]: E0216 20:57:59.597086 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.597164 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 20:57:59 crc kubenswrapper[4805]: E0216 20:57:59.597267 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 20:57:59 crc kubenswrapper[4805]: E0216 20:57:59.597398 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.599160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.599199 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.599209 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.599225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.599237 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.614029 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 06:38:56.633392133 +0000 UTC
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.701832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.701898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.701910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.701926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.701937 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.806431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.806555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.806569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.806589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.806604 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.910370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.910416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.910426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.910448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:57:59 crc kubenswrapper[4805]: I0216 20:57:59.910459 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:57:59Z","lastTransitionTime":"2026-02-16T20:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.013690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.013788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.013806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.013848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.013865 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.117203 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.117665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.117774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.117901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.117996 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.221085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.221149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.221163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.221182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.221194 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.324357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.324869 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.324940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.325045 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.325121 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.429779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.429871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.429903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.429938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.430022 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.533269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.533342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.533362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.533397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.533421 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.597603 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh"
Feb 16 20:58:00 crc kubenswrapper[4805]: E0216 20:58:00.597874 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.614818 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:10:06.127383824 +0000 UTC
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.636484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.636531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.636543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.636560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.636573 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.739896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.739943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.739961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.739986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.739998 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.842977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.843030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.843039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.843058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.843070 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.946246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.946306 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.946323 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.946347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:00 crc kubenswrapper[4805]: I0216 20:58:00.946396 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:00Z","lastTransitionTime":"2026-02-16T20:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.049505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.049601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.049634 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.049665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.049686 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.153125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.153175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.153186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.153202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.153213 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.256020 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.256095 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.256119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.256150 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.256174 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.359401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.359483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.359500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.359528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.359547 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.462995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.463071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.463094 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.463126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.463150 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.566206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.566269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.566282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.566298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.566324 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.597817 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.597962 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 20:58:01 crc kubenswrapper[4805]: E0216 20:58:01.598091 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 20:58:01 crc kubenswrapper[4805]: E0216 20:58:01.598125 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.597920 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 20:58:01 crc kubenswrapper[4805]: E0216 20:58:01.598543 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.615349 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:55:40.911553495 +0000 UTC
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.670106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.670241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.670255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.670277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.670289 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.773313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.773374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.773391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.773423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.773439 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.876445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.876910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.876921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.876940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.876952 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.980205 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.980286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.980309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.980906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.980961 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.995315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.995374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.995389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.995892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:01 crc kubenswrapper[4805]: I0216 20:58:01.996020 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:01Z","lastTransitionTime":"2026-02-16T20:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: E0216 20:58:02.013058 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.017086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.017129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.017140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.017158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.017170 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: E0216 20:58:02.030283 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.035548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.035587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.035599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.035616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.035629 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: E0216 20:58:02.052488 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.056679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.056742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.056759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.056779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.056795 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: E0216 20:58:02.070417 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.073807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.073848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.073862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.073881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.073894 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: E0216 20:58:02.089949 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:02Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:02 crc kubenswrapper[4805]: E0216 20:58:02.090126 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.092114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.092175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.092191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.092212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.092226 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.196307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.196386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.196405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.196436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.196457 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.300105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.300165 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.300184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.300206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.300226 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.404059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.404110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.404121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.404137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.404147 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.505891 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.505943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.505953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.505968 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.505980 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.597108 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:02 crc kubenswrapper[4805]: E0216 20:58:02.597294 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.608470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.608517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.608529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.608547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.608559 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.615828 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:58:14.239117165 +0000 UTC Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.711521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.711582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.711594 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.711616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.711637 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.815058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.815104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.815113 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.815132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.815146 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.918406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.918455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.918467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.918480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:02 crc kubenswrapper[4805]: I0216 20:58:02.918491 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:02Z","lastTransitionTime":"2026-02-16T20:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.021851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.021899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.021912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.021929 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.021942 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.125120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.125163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.125174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.125189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.125200 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.228043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.228097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.228111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.228134 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.228153 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.333936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.334311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.334382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.334459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.334588 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.426091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:03 crc kubenswrapper[4805]: E0216 20:58:03.426342 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:58:03 crc kubenswrapper[4805]: E0216 20:58:03.426421 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs podName:68747e4a-6576-44c3-b663-250315f6712f nodeName:}" failed. No retries permitted until 2026-02-16 20:59:07.426397658 +0000 UTC m=+165.245080973 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs") pod "network-metrics-daemon-b6xdh" (UID: "68747e4a-6576-44c3-b663-250315f6712f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.438818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.438877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.438895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.438920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.438941 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.541642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.542188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.542359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.542516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.542661 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.597924 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.597966 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.597935 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:03 crc kubenswrapper[4805]: E0216 20:58:03.598137 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:03 crc kubenswrapper[4805]: E0216 20:58:03.598297 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:03 crc kubenswrapper[4805]: E0216 20:58:03.598453 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.616011 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:21:04.851547809 +0000 UTC Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.622353 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854331c19af16ef1277ee61c051b4dbe412d96edf6ca9449e058d198275a50fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.643948 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.646322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.646389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.646412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 
20:58:03.646447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.646472 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.665500 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8qwfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:35Z\\\",\\\"message\\\":\\\"2026-02-16T20:56:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9\\\\n2026-02-16T20:56:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2772fa92-ebb9-4af5-9e06-379078f6d6c9 to /host/opt/cni/bin/\\\\n2026-02-16T20:56:50Z [verbose] multus-daemon started\\\\n2026-02-16T20:56:50Z [verbose] Readiness Indicator file check\\\\n2026-02-16T20:57:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhj6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8qwfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.681914 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c5pjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14b20786-6d22-491c-9054-ae32a4f25efd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5f4704154eeee905d5575168e5af0049d486740d78fc3cd34171735a4d0feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfhzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c5pjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.699806 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ac6d346-d2d2-4ad6-a72f-7506b709bea0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f46905478593c4519251e8e0fa9abb345b0abccb6c15321510f8dbd8c64a419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bc04b7637a7f9231013ac473ef0162ae93ad
10a6639915d128538576827f971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pln6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2qdfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.720583 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3bbdc28-1c48-4c0c-9eea-1d52fe0af052\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb19ec2c96c4d3860f8920e6055828470cafb4bb558309ac50e6594530e7a8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c41f3f4be6b2c8f5fc77f8954a9a9f193596c94b8bef6a552ffa017226188c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3197dc5fc7536caf82ff52bec9e45cb4ecc80351381fa62fce8e1e15d345d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.741309 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.749176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.749220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.749235 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.749255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.749270 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.762632 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.779164 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ef60407b016cdd7e67563c3290135dab62081b92c21a94d6c585e333030f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331a20a5b87c8f104080ee8a4bff188e80fa4251cc6d7674cc1ea62a64b803c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.795373 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e75ed224-e9fe-421a-9fda-36c7b5dc70f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T20:56:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 20:56:44.318671 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 20:56:44.318997 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 20:56:44.320797 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3187751623/tls.crt::/tmp/serving-cert-3187751623/tls.key\\\\\\\"\\\\nI0216 20:56:44.550382 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 20:56:44.557295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 20:56:44.557322 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 20:56:44.557352 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 20:56:44.557357 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 20:56:44.562569 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 20:56:44.562585 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 20:56:44.562599 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562604 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 20:56:44.562609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 20:56:44.562613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 20:56:44.562616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 20:56:44.562619 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 20:56:44.564494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fb6faa6c04c09bd566fbe49060d26e75a5
e128abd233b001ec5afb1aeac61ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.804594 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-48h2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368e42ff-95cf-460e-84c6-ae9aeb3f8657\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b0f9ba75c68504f82f2fc5e8287ba6f42e9e3688fb01f9e510087a1a416b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbwm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-48h2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.818782 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8eef9cf-fd62-4c34-b4d1-2e1242bd437a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec53e777ff7d659db4c981e40810d43191884be7da4bdc91c33ee6222b6122d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d43bbd4701628cb260a5733c598d3d0e78d756224d25d0223cf8babbaf62d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faa89e88c5e8ef40a43cc3c80ef0a65378d83988f25256a15ab49d08aa2d93f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06b0a1cdf04b26c611a1effaf14985f29b600dd6fb5583b813b57780cb6bd31d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947a94578bd2881cd89f37b79d378e1fc7afc8b26c86d10c03e84fb4f611ac3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c83292
4eb097dbad2488fadaa342179b6eb068\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d7c8655d20c28c24bf54a553c832924eb097dbad2488fadaa342179b6eb068\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b293a6f9de971eb17a4c6f39eb8f8a898c22dda83f768f9af93e9b392fe143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T20:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5f4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wmh7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.828789 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68747e4a-6576-44c3-b663-250315f6712f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b6xdh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.841287 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ae3052-7dd1-4860-8446-76171676eb7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5597f03c84235ce7a25c3e39e954eabad6420d324e249c571071684ab9c7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b4e7a5df092ddf14bf22f20d4f34dd75b3a55496bd64a2a92df1cce3486782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0007ea2a503b6c8f06266f831327c0999d56add6259235f2593c545cefb73afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a7
7abda014aff3be6b1e1a8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea206b796eb944b5832db0f8bf5a0bd8db79ad9a77abda014aff3be6b1e1a8b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.852103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.852144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.852154 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.852174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc 
kubenswrapper[4805]: I0216 20:58:03.852186 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.852532 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43225cd3-b4a5-4fd7-903e-cbeda10fb884\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046299ba36811947fdf82ca53f38854b76e08fd42cd0c4687988c59a2a286f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29be5c712c1fafa16b4b21631294d4d7f1442cbd344b49bfa1d91c2aba0308dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.863323 4805 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3be44e4a8339e472340c03f98dce168e1ce12483bc4f877827523b217aa67257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.874525 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00c308fa-9d36-4fec-8717-6dbbe57523c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b597e150711d391cc6ff3ac126a083804c5e578dc16b801706d03edbbb4145f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff5kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq8qd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 
20:58:03.891856 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8719b45e-eed5-4265-87de-46967022148f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T20:56:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T20:57:47Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:57:47.621941 6997 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 20:57:47.621949 6997 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in 
node crc\\\\nI0216 20:57:47.621955 6997 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0216 20:57:47.621960 6997 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0216 20:57:47.621941 6997 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T20:57:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T20:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86a1d474e781a0628
cae4db77ce439d40797d33a411bb031e16990e6e7da957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T20:56:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T20:56:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6stvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T20:56:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-crk96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:03Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.955169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.955227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.955243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.955265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:03 crc kubenswrapper[4805]: I0216 20:58:03.955280 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:03Z","lastTransitionTime":"2026-02-16T20:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.058531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.058579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.058593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.058608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.058620 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.161084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.161128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.161138 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.161153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.161164 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.263823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.264177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.264255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.264315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.264393 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.367186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.367227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.367256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.367271 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.367282 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.470306 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.470379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.470396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.470420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.470438 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.573358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.573405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.573413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.573428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.573438 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.596945 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:04 crc kubenswrapper[4805]: E0216 20:58:04.597826 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.598706 4805 scope.go:117] "RemoveContainer" containerID="344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229" Feb 16 20:58:04 crc kubenswrapper[4805]: E0216 20:58:04.599028 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.617784 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:18:47.320812055 +0000 UTC Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.676643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.676717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.676777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.676811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.676834 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.780659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.780848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.780879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.780909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.780932 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.883545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.883593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.883611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.883633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.883649 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.986468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.986566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.986581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.986628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:04 crc kubenswrapper[4805]: I0216 20:58:04.986644 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:04Z","lastTransitionTime":"2026-02-16T20:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.090205 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.090267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.090284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.090309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.090329 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.194266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.194440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.194472 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.194505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.194529 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.297545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.297591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.297600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.297615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.297626 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.400300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.400356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.400367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.400385 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.400397 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.503080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.503119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.503128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.503144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.503154 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.596896 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.596995 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:05 crc kubenswrapper[4805]: E0216 20:58:05.597110 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:05 crc kubenswrapper[4805]: E0216 20:58:05.597215 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.596995 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:05 crc kubenswrapper[4805]: E0216 20:58:05.597377 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.605569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.605660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.605687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.605769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.605789 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.618787 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:56:45.702147839 +0000 UTC Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.710180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.710226 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.710243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.710268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.710286 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.815220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.815269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.815278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.815294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.815306 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.919497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.919573 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.919587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.919609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:05 crc kubenswrapper[4805]: I0216 20:58:05.919626 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:05Z","lastTransitionTime":"2026-02-16T20:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.023193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.023282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.023305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.023335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.023359 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.126677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.126775 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.126788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.126809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.126820 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.229845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.229906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.229920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.229941 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.229955 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.333320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.333391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.333408 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.333436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.333455 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.437429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.437482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.437493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.437516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.437531 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.540605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.540649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.540662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.540679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.540689 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.597902 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:06 crc kubenswrapper[4805]: E0216 20:58:06.598196 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.619246 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:55:08.44764852 +0000 UTC Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.643669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.643836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.643865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.643896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.643922 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.747420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.747465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.747480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.747496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.747508 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.851401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.851463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.851481 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.851508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.851535 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.954980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.955067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.955092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.955121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:06 crc kubenswrapper[4805]: I0216 20:58:06.955142 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:06Z","lastTransitionTime":"2026-02-16T20:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.058402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.058480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.058499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.058525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.058551 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.161839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.161920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.161939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.161965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.161985 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.264291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.264343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.264354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.264377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.264388 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.367733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.367784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.367799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.367820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.367832 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.470933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.470998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.471014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.471041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.471064 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.574585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.574638 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.574655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.574678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.574693 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.597426 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.597493 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:07 crc kubenswrapper[4805]: E0216 20:58:07.597587 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.597491 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:07 crc kubenswrapper[4805]: E0216 20:58:07.597780 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:07 crc kubenswrapper[4805]: E0216 20:58:07.597800 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.620837 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:46:47.184445063 +0000 UTC Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.678041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.678108 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.678125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.678150 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.678169 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.781668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.781756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.781766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.781784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.781794 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.885255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.885313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.885329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.885345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.885356 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.988835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.988881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.988890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.988908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:07 crc kubenswrapper[4805]: I0216 20:58:07.988920 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:07Z","lastTransitionTime":"2026-02-16T20:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.092048 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.092097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.092108 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.092125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.092136 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.195583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.195683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.195821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.195853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.195875 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.298525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.298570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.298582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.298599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.298612 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.401460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.401526 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.401544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.401564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.401581 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.505317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.505385 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.505404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.505429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.505447 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.597153 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:08 crc kubenswrapper[4805]: E0216 20:58:08.597339 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.608492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.608553 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.608565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.608581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.608594 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.621445 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:58:18.302545882 +0000 UTC Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.711202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.711249 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.711262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.711280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.711293 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.815440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.815501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.815513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.815529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.815538 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.918418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.918455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.918466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.918479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:08 crc kubenswrapper[4805]: I0216 20:58:08.918489 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:08Z","lastTransitionTime":"2026-02-16T20:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.021340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.021394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.021404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.021421 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.021435 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.124257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.124304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.124314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.124330 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.124343 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.228948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.229043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.229064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.229143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.229164 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.332162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.332223 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.332240 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.332265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.332283 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.435920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.435977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.435987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.436006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.436020 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.539524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.539595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.539611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.539636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.539657 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.597432 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.597493 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.597438 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:09 crc kubenswrapper[4805]: E0216 20:58:09.597659 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:09 crc kubenswrapper[4805]: E0216 20:58:09.597985 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:09 crc kubenswrapper[4805]: E0216 20:58:09.598932 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.621699 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:14:25.448866445 +0000 UTC Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.625855 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.642358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.642406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.642421 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.642442 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.642457 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.746281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.746343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.746362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.746385 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.746402 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.849871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.849942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.849959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.849984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.850002 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.952950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.953072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.953098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.953129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:09 crc kubenswrapper[4805]: I0216 20:58:09.953153 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:09Z","lastTransitionTime":"2026-02-16T20:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.056951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.057022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.057040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.057070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.057094 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.159823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.159861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.159871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.159886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.159896 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.262800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.262849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.262861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.262878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.262889 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.365444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.365504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.365515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.365536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.365552 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.470813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.470916 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.470942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.470975 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.471000 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.748710 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:09:49.165209857 +0000 UTC Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.748825 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:10 crc kubenswrapper[4805]: E0216 20:58:10.749114 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.751112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.751167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.751180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.751200 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.751211 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.752548 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:10 crc kubenswrapper[4805]: E0216 20:58:10.752686 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.854677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.854740 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.854749 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.854767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.854777 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.958116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.958174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.958185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.958207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:10 crc kubenswrapper[4805]: I0216 20:58:10.958218 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:10Z","lastTransitionTime":"2026-02-16T20:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.060324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.060400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.060412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.060434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.060446 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:11Z","lastTransitionTime":"2026-02-16T20:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.162826 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.162899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.162919 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.162944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.162961 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:11Z","lastTransitionTime":"2026-02-16T20:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.266239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.266316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.266329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.266344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.266355 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:11Z","lastTransitionTime":"2026-02-16T20:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.368978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.369045 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.369064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.369270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.369290 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:11Z","lastTransitionTime":"2026-02-16T20:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.472523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.472565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.472573 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.472590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.472600 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:11Z","lastTransitionTime":"2026-02-16T20:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.575288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.575335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.575349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.575371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.575390 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:11Z","lastTransitionTime":"2026-02-16T20:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.597705 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.597818 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:11 crc kubenswrapper[4805]: E0216 20:58:11.597910 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:11 crc kubenswrapper[4805]: E0216 20:58:11.598157 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.679033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.679090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.679101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.679123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.679141 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:11Z","lastTransitionTime":"2026-02-16T20:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.749310 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:10:58.776566362 +0000 UTC Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.782203 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.782282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.782301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.782332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.782352 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:11Z","lastTransitionTime":"2026-02-16T20:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.884807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.884876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.884896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.884921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.884938 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:11Z","lastTransitionTime":"2026-02-16T20:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.987553 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.987609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.987633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.987663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:11 crc kubenswrapper[4805]: I0216 20:58:11.987687 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:11Z","lastTransitionTime":"2026-02-16T20:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.090978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.091051 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.091076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.091289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.091315 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.205841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.206499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.206521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.206548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.206568 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.310263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.310338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.310356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.310393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.310412 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.365085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.365173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.365197 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.365227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.365248 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: E0216 20:58:12.387518 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.393359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.393439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.393463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.393498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.393542 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: E0216 20:58:12.416055 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.428342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.428416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.428437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.428462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.428540 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: E0216 20:58:12.451233 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.456336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.456410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.456431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.456459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.456476 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: E0216 20:58:12.477086 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.483171 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.483234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.483252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.483279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.483296 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: E0216 20:58:12.502254 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T20:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96338809-94a9-435f-a493-fbf04d8ca44c\\\",\\\"systemUUID\\\":\\\"f0e28e09-8311-445d-80ef-c735d31fd21e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T20:58:12Z is after 2025-08-24T17:21:41Z" Feb 16 20:58:12 crc kubenswrapper[4805]: E0216 20:58:12.502482 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.504763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.504817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.504828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.504849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.504861 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.597841 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.597857 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:12 crc kubenswrapper[4805]: E0216 20:58:12.598060 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:12 crc kubenswrapper[4805]: E0216 20:58:12.598186 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.608580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.608629 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.608646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.608670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.608687 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.712756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.712827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.712852 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.712884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.712978 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.750029 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:17:26.665872175 +0000 UTC Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.817220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.817256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.817266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.817281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.817295 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.920327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.920370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.920379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.920396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:12 crc kubenswrapper[4805]: I0216 20:58:12.920407 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:12Z","lastTransitionTime":"2026-02-16T20:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.023813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.023902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.023929 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.023963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.023988 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:13Z","lastTransitionTime":"2026-02-16T20:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.127297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.127340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.127382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.127400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.127413 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:13Z","lastTransitionTime":"2026-02-16T20:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.229862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.229944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.229954 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.229968 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.229976 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:13Z","lastTransitionTime":"2026-02-16T20:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.333545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.333607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.333624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.333648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.333665 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:13Z","lastTransitionTime":"2026-02-16T20:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.437247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.437338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.437360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.437391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.437419 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:13Z","lastTransitionTime":"2026-02-16T20:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.541036 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.541112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.541125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.541147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.541163 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:13Z","lastTransitionTime":"2026-02-16T20:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.597078 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.597092 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:13 crc kubenswrapper[4805]: E0216 20:58:13.597270 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:13 crc kubenswrapper[4805]: E0216 20:58:13.597561 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.651710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.651811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.651833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.651863 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.651884 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:13Z","lastTransitionTime":"2026-02-16T20:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.674408 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.674354744 podStartE2EDuration="1m24.674354744s" podCreationTimestamp="2026-02-16 20:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:13.649629202 +0000 UTC m=+111.468312537" watchObservedRunningTime="2026-02-16 20:58:13.674354744 +0000 UTC m=+111.493038049" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.750244 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:44:00.396343938 +0000 UTC Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.754181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.754355 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.754466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.754569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.754672 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:13Z","lastTransitionTime":"2026-02-16T20:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.763220 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.763200078 podStartE2EDuration="1m29.763200078s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:13.762758386 +0000 UTC m=+111.581441681" watchObservedRunningTime="2026-02-16 20:58:13.763200078 +0000 UTC m=+111.581883373" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.792971 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-48h2w" podStartSLOduration=90.792949006 podStartE2EDuration="1m30.792949006s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:13.774666359 +0000 UTC m=+111.593349654" watchObservedRunningTime="2026-02-16 20:58:13.792949006 +0000 UTC m=+111.611632301" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.793260 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-wmh7d" podStartSLOduration=89.793253495 podStartE2EDuration="1m29.793253495s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:13.792692069 +0000 UTC m=+111.611375374" watchObservedRunningTime="2026-02-16 20:58:13.793253495 +0000 UTC m=+111.611936790" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.808434 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" 
podStartSLOduration=89.808413396 podStartE2EDuration="1m29.808413396s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:13.808399255 +0000 UTC m=+111.627082570" watchObservedRunningTime="2026-02-16 20:58:13.808413396 +0000 UTC m=+111.627096691" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.850632 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.850608853 podStartE2EDuration="55.850608853s" podCreationTimestamp="2026-02-16 20:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:13.850407577 +0000 UTC m=+111.669090882" watchObservedRunningTime="2026-02-16 20:58:13.850608853 +0000 UTC m=+111.669292148" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.857446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.857502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.857513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.857529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.857557 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:13Z","lastTransitionTime":"2026-02-16T20:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.887566 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.887543806 podStartE2EDuration="37.887543806s" podCreationTimestamp="2026-02-16 20:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:13.868217751 +0000 UTC m=+111.686901056" watchObservedRunningTime="2026-02-16 20:58:13.887543806 +0000 UTC m=+111.706227101" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.902607 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8qwfz" podStartSLOduration=89.902581855 podStartE2EDuration="1m29.902581855s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:13.901964158 +0000 UTC m=+111.720647463" watchObservedRunningTime="2026-02-16 20:58:13.902581855 +0000 UTC m=+111.721265150" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.913371 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-c5pjk" podStartSLOduration=89.913345387 podStartE2EDuration="1m29.913345387s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:13.912167335 +0000 UTC m=+111.730850630" watchObservedRunningTime="2026-02-16 20:58:13.913345387 +0000 UTC m=+111.732028682" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.925456 4805 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2qdfj" podStartSLOduration=88.925424795 podStartE2EDuration="1m28.925424795s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:13.925320193 +0000 UTC m=+111.744003488" watchObservedRunningTime="2026-02-16 20:58:13.925424795 +0000 UTC m=+111.744108110" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.952358 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.952339046 podStartE2EDuration="4.952339046s" podCreationTimestamp="2026-02-16 20:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:13.951039371 +0000 UTC m=+111.769722666" watchObservedRunningTime="2026-02-16 20:58:13.952339046 +0000 UTC m=+111.771022341" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.960234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.960276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.960286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.960301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:13 crc kubenswrapper[4805]: I0216 20:58:13.960312 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:13Z","lastTransitionTime":"2026-02-16T20:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.063762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.063821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.063832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.063852 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.063867 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:14Z","lastTransitionTime":"2026-02-16T20:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.169634 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.169696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.169718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.169770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.169789 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:14Z","lastTransitionTime":"2026-02-16T20:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.273634 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.273697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.273763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.273788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.273802 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:14Z","lastTransitionTime":"2026-02-16T20:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.377776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.377848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.377870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.377897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.377916 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:14Z","lastTransitionTime":"2026-02-16T20:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.481606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.481675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.481692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.481766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.481785 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:14Z","lastTransitionTime":"2026-02-16T20:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.585579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.585627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.585638 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.585658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.585671 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:14Z","lastTransitionTime":"2026-02-16T20:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.597493 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:14 crc kubenswrapper[4805]: E0216 20:58:14.597681 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.597492 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:14 crc kubenswrapper[4805]: E0216 20:58:14.597920 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.690061 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.690133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.690156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.690186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.690207 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:14Z","lastTransitionTime":"2026-02-16T20:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.750593 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:10:26.205231144 +0000 UTC Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.792394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.792422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.792432 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.792449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.792458 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:14Z","lastTransitionTime":"2026-02-16T20:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.895427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.895498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.895522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.895570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.895597 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:14Z","lastTransitionTime":"2026-02-16T20:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.998422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.998498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.998514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.998539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:14 crc kubenswrapper[4805]: I0216 20:58:14.998556 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:14Z","lastTransitionTime":"2026-02-16T20:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.101781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.101845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.101853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.101871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.101882 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:15Z","lastTransitionTime":"2026-02-16T20:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.205799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.205877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.205902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.205930 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.205952 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:15Z","lastTransitionTime":"2026-02-16T20:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.312219 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.312328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.312345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.312373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.312388 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:15Z","lastTransitionTime":"2026-02-16T20:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.415359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.415406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.415422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.415444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.415461 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:15Z","lastTransitionTime":"2026-02-16T20:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.518623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.518660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.518676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.518697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.518713 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:15Z","lastTransitionTime":"2026-02-16T20:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.598764 4805 scope.go:117] "RemoveContainer" containerID="344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229" Feb 16 20:58:15 crc kubenswrapper[4805]: E0216 20:58:15.599093 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-crk96_openshift-ovn-kubernetes(8719b45e-eed5-4265-87de-46967022148f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.599372 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:15 crc kubenswrapper[4805]: E0216 20:58:15.599518 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.599366 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:15 crc kubenswrapper[4805]: E0216 20:58:15.599766 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.622791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.622853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.622875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.622902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.622924 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:15Z","lastTransitionTime":"2026-02-16T20:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.726533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.726671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.726696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.726753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.726776 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:15Z","lastTransitionTime":"2026-02-16T20:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.750868 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:49:38.945328144 +0000 UTC Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.829967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.830037 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.830056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.830079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.830097 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:15Z","lastTransitionTime":"2026-02-16T20:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.933230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.933318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.933343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.933372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:15 crc kubenswrapper[4805]: I0216 20:58:15.933393 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:15Z","lastTransitionTime":"2026-02-16T20:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.045321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.045385 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.045406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.045428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.045441 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:16Z","lastTransitionTime":"2026-02-16T20:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.148506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.148564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.148579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.148600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.148616 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:16Z","lastTransitionTime":"2026-02-16T20:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.252257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.252324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.252341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.252365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.252383 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:16Z","lastTransitionTime":"2026-02-16T20:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.354927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.355009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.355033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.355247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.355272 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:16Z","lastTransitionTime":"2026-02-16T20:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.459928 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.460004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.460016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.460038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.460053 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:16Z","lastTransitionTime":"2026-02-16T20:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.562606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.562677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.562694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.562749 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.562769 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:16Z","lastTransitionTime":"2026-02-16T20:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.597282 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.597338 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:16 crc kubenswrapper[4805]: E0216 20:58:16.597478 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:16 crc kubenswrapper[4805]: E0216 20:58:16.597799 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.665908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.665954 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.665966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.665989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.665999 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:16Z","lastTransitionTime":"2026-02-16T20:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.751952 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:20:04.303466145 +0000 UTC Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.768948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.768995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.769008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.769024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.769038 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:16Z","lastTransitionTime":"2026-02-16T20:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.873997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.874021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.874028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.874041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.874048 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:16Z","lastTransitionTime":"2026-02-16T20:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.976542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.976575 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.976585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.976607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:16 crc kubenswrapper[4805]: I0216 20:58:16.976617 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:16Z","lastTransitionTime":"2026-02-16T20:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.078934 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.078984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.078997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.079012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.079023 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:17Z","lastTransitionTime":"2026-02-16T20:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.183373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.183451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.183463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.183479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.183490 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:17Z","lastTransitionTime":"2026-02-16T20:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.286633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.286690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.286703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.286745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.286769 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:17Z","lastTransitionTime":"2026-02-16T20:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.390080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.390143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.390160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.390185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.390205 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:17Z","lastTransitionTime":"2026-02-16T20:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.492995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.493047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.493068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.493098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.493120 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:17Z","lastTransitionTime":"2026-02-16T20:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.596171 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.596232 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.596250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.596275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.596292 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:17Z","lastTransitionTime":"2026-02-16T20:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.596935 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.596959 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:17 crc kubenswrapper[4805]: E0216 20:58:17.597077 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:17 crc kubenswrapper[4805]: E0216 20:58:17.597161 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.699647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.699693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.699705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.699754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.699767 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:17Z","lastTransitionTime":"2026-02-16T20:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.752628 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:33:50.453098525 +0000 UTC Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.802404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.802479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.802500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.802531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.802557 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:17Z","lastTransitionTime":"2026-02-16T20:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.905419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.905455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.905466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.905481 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:17 crc kubenswrapper[4805]: I0216 20:58:17.905492 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:17Z","lastTransitionTime":"2026-02-16T20:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.008483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.008564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.008582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.008607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.008624 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:18Z","lastTransitionTime":"2026-02-16T20:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.112560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.112652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.112678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.112707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.112771 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:18Z","lastTransitionTime":"2026-02-16T20:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.216049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.216119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.216136 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.216161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.216178 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:18Z","lastTransitionTime":"2026-02-16T20:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.319098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.319157 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.319173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.319195 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.319213 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:18Z","lastTransitionTime":"2026-02-16T20:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.422008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.422092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.422109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.422131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.422148 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:18Z","lastTransitionTime":"2026-02-16T20:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.525307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.525377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.525400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.525427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.525445 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:18Z","lastTransitionTime":"2026-02-16T20:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.597930 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.598006 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:18 crc kubenswrapper[4805]: E0216 20:58:18.598226 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:18 crc kubenswrapper[4805]: E0216 20:58:18.598435 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.628427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.628485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.628502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.628524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.628541 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:18Z","lastTransitionTime":"2026-02-16T20:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.732011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.732079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.732103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.732131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.732151 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:18Z","lastTransitionTime":"2026-02-16T20:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.753816 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:53:22.68773397 +0000 UTC Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.835091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.835140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.835155 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.835178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.835195 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:18Z","lastTransitionTime":"2026-02-16T20:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.938039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.938112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.938145 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.938174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:18 crc kubenswrapper[4805]: I0216 20:58:18.938193 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:18Z","lastTransitionTime":"2026-02-16T20:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.041609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.041655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.041667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.041684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.041696 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:19Z","lastTransitionTime":"2026-02-16T20:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.145050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.145111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.145132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.145159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.145180 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:19Z","lastTransitionTime":"2026-02-16T20:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.248349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.248441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.248466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.248500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.248525 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:19Z","lastTransitionTime":"2026-02-16T20:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.351020 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.351074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.351087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.351107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.351118 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:19Z","lastTransitionTime":"2026-02-16T20:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.452978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.453096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.453121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.453151 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.453173 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:19Z","lastTransitionTime":"2026-02-16T20:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.555555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.555693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.555707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.555745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.555762 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:19Z","lastTransitionTime":"2026-02-16T20:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.597594 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.597794 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:19 crc kubenswrapper[4805]: E0216 20:58:19.597975 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:19 crc kubenswrapper[4805]: E0216 20:58:19.598098 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.659122 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.659197 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.659216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.659241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.659271 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:19Z","lastTransitionTime":"2026-02-16T20:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.754058 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:59:56.240558844 +0000 UTC Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.762289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.762375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.762403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.762436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.762462 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:19Z","lastTransitionTime":"2026-02-16T20:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.866347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.866403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.866424 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.866448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.866465 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:19Z","lastTransitionTime":"2026-02-16T20:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.969478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.969537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.969555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.969597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:19 crc kubenswrapper[4805]: I0216 20:58:19.969631 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:19Z","lastTransitionTime":"2026-02-16T20:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.071830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.071899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.071910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.071926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.071937 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:20Z","lastTransitionTime":"2026-02-16T20:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.175176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.175231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.175244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.175264 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.175277 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:20Z","lastTransitionTime":"2026-02-16T20:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.278391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.278475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.278501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.278530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.278553 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:20Z","lastTransitionTime":"2026-02-16T20:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.382143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.382209 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.382228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.382254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.382273 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:20Z","lastTransitionTime":"2026-02-16T20:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.485846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.485928 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.485950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.485980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.486002 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:20Z","lastTransitionTime":"2026-02-16T20:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.593033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.593172 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.593195 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.593228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.593250 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:20Z","lastTransitionTime":"2026-02-16T20:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.597342 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:20 crc kubenswrapper[4805]: E0216 20:58:20.597460 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.597359 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:20 crc kubenswrapper[4805]: E0216 20:58:20.597608 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.695586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.695657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.695679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.695706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.695774 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:20Z","lastTransitionTime":"2026-02-16T20:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.754564 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 08:28:28.602821144 +0000 UTC Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.798285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.798359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.798376 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.798410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.798426 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:20Z","lastTransitionTime":"2026-02-16T20:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.902347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.902441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.902461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.902485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:20 crc kubenswrapper[4805]: I0216 20:58:20.902505 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:20Z","lastTransitionTime":"2026-02-16T20:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.006416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.006509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.006520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.006544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.006561 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:21Z","lastTransitionTime":"2026-02-16T20:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.109616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.109689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.109716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.109795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.109815 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:21Z","lastTransitionTime":"2026-02-16T20:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.212267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.212310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.212320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.212334 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.212345 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:21Z","lastTransitionTime":"2026-02-16T20:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.317486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.317566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.317588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.317618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.317649 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:21Z","lastTransitionTime":"2026-02-16T20:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.421458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.421510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.421521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.421541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.421554 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:21Z","lastTransitionTime":"2026-02-16T20:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.525159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.525207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.525217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.525236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.525269 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:21Z","lastTransitionTime":"2026-02-16T20:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.598060 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.598116 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:21 crc kubenswrapper[4805]: E0216 20:58:21.598313 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:21 crc kubenswrapper[4805]: E0216 20:58:21.598398 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.628240 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.628279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.628289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.628306 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.628318 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:21Z","lastTransitionTime":"2026-02-16T20:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.730974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.731022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.731033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.731050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.731062 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:21Z","lastTransitionTime":"2026-02-16T20:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.755591 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:01:51.507740877 +0000 UTC Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.796645 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qwfz_7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2/kube-multus/1.log" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.797372 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qwfz_7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2/kube-multus/0.log" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.797444 4805 generic.go:334] "Generic (PLEG): container finished" podID="7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2" containerID="ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36" exitCode=1 Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.797485 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qwfz" event={"ID":"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2","Type":"ContainerDied","Data":"ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36"} Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.797531 4805 scope.go:117] "RemoveContainer" containerID="cb37ce8371e1aebf17d377f4579bd4aee9db38897b0116c8affb6ee5f579193a" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.800475 4805 scope.go:117] "RemoveContainer" containerID="ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36" Feb 16 20:58:21 crc kubenswrapper[4805]: E0216 20:58:21.802803 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-8qwfz_openshift-multus(7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2)\"" pod="openshift-multus/multus-8qwfz" 
podUID="7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.835631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.835745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.835758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.835801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.835813 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:21Z","lastTransitionTime":"2026-02-16T20:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.938018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.938068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.938080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.938097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:21 crc kubenswrapper[4805]: I0216 20:58:21.938112 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:21Z","lastTransitionTime":"2026-02-16T20:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.040517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.040545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.040553 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.040565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.040574 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:22Z","lastTransitionTime":"2026-02-16T20:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.143567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.143628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.143643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.143663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.143677 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:22Z","lastTransitionTime":"2026-02-16T20:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.246901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.246947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.246958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.246977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.246990 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:22Z","lastTransitionTime":"2026-02-16T20:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.349424 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.349473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.349483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.349501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.349513 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:22Z","lastTransitionTime":"2026-02-16T20:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.453158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.453220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.453230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.453253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.453268 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:22Z","lastTransitionTime":"2026-02-16T20:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.556506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.556546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.556559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.556578 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.556592 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:22Z","lastTransitionTime":"2026-02-16T20:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.597338 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.597525 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:22 crc kubenswrapper[4805]: E0216 20:58:22.597653 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:22 crc kubenswrapper[4805]: E0216 20:58:22.598244 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.660242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.660320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.660330 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.660350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.660362 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:22Z","lastTransitionTime":"2026-02-16T20:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.739868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.739922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.739935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.739982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.739994 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T20:58:22Z","lastTransitionTime":"2026-02-16T20:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.757864 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:59:17.145596837 +0000 UTC Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.796622 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq"] Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.797142 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.799254 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.799494 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.801110 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.802569 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.803129 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qwfz_7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2/kube-multus/1.log" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.889684 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d4f424cb-888b-464a-a79f-190276fe9370-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.889826 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f424cb-888b-464a-a79f-190276fe9370-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc 
kubenswrapper[4805]: I0216 20:58:22.889871 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f424cb-888b-464a-a79f-190276fe9370-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.889901 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4f424cb-888b-464a-a79f-190276fe9370-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.889966 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d4f424cb-888b-464a-a79f-190276fe9370-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.991561 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f424cb-888b-464a-a79f-190276fe9370-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.991613 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4f424cb-888b-464a-a79f-190276fe9370-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.991685 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d4f424cb-888b-464a-a79f-190276fe9370-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.991755 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d4f424cb-888b-464a-a79f-190276fe9370-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.991841 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f424cb-888b-464a-a79f-190276fe9370-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.991914 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d4f424cb-888b-464a-a79f-190276fe9370-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.991909 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d4f424cb-888b-464a-a79f-190276fe9370-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:22 crc kubenswrapper[4805]: I0216 20:58:22.993576 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4f424cb-888b-464a-a79f-190276fe9370-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:23 crc kubenswrapper[4805]: I0216 20:58:23.000435 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f424cb-888b-464a-a79f-190276fe9370-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:23 crc kubenswrapper[4805]: I0216 20:58:23.020908 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f424cb-888b-464a-a79f-190276fe9370-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zxwpq\" (UID: \"d4f424cb-888b-464a-a79f-190276fe9370\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:23 crc kubenswrapper[4805]: I0216 20:58:23.115167 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" Feb 16 20:58:23 crc kubenswrapper[4805]: E0216 20:58:23.585434 4805 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 20:58:23 crc kubenswrapper[4805]: I0216 20:58:23.597058 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:23 crc kubenswrapper[4805]: I0216 20:58:23.597183 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:23 crc kubenswrapper[4805]: E0216 20:58:23.598242 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:23 crc kubenswrapper[4805]: E0216 20:58:23.598610 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:23 crc kubenswrapper[4805]: E0216 20:58:23.735759 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 20:58:23 crc kubenswrapper[4805]: I0216 20:58:23.758125 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:15:51.467415762 +0000 UTC Feb 16 20:58:23 crc kubenswrapper[4805]: I0216 20:58:23.758214 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 20:58:23 crc kubenswrapper[4805]: I0216 20:58:23.766505 4805 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 20:58:23 crc kubenswrapper[4805]: I0216 20:58:23.808682 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" event={"ID":"d4f424cb-888b-464a-a79f-190276fe9370","Type":"ContainerStarted","Data":"cff16f1d315076e2b4c35fa0134c2916ae9f28a856eea9944d2f1015ae471bce"} Feb 16 20:58:23 crc kubenswrapper[4805]: I0216 20:58:23.808807 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" event={"ID":"d4f424cb-888b-464a-a79f-190276fe9370","Type":"ContainerStarted","Data":"0046da0c428a44dafd40d03271ce40dda450cb4caf6771b1e43fccca23a9cb5e"} Feb 16 20:58:23 crc kubenswrapper[4805]: I0216 20:58:23.826283 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxwpq" podStartSLOduration=99.826251587 podStartE2EDuration="1m39.826251587s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:23.826063152 +0000 UTC m=+121.644746537" watchObservedRunningTime="2026-02-16 20:58:23.826251587 +0000 UTC m=+121.644934932" Feb 16 20:58:24 crc kubenswrapper[4805]: I0216 20:58:24.597456 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:24 crc kubenswrapper[4805]: I0216 20:58:24.597456 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:24 crc kubenswrapper[4805]: E0216 20:58:24.597654 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:24 crc kubenswrapper[4805]: E0216 20:58:24.597707 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:25 crc kubenswrapper[4805]: I0216 20:58:25.597792 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:25 crc kubenswrapper[4805]: I0216 20:58:25.597902 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:25 crc kubenswrapper[4805]: E0216 20:58:25.597936 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:25 crc kubenswrapper[4805]: E0216 20:58:25.598075 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:26 crc kubenswrapper[4805]: I0216 20:58:26.597208 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:26 crc kubenswrapper[4805]: I0216 20:58:26.597330 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:26 crc kubenswrapper[4805]: E0216 20:58:26.597357 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:26 crc kubenswrapper[4805]: E0216 20:58:26.597571 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:27 crc kubenswrapper[4805]: I0216 20:58:27.597017 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:27 crc kubenswrapper[4805]: I0216 20:58:27.597119 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:27 crc kubenswrapper[4805]: E0216 20:58:27.597205 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:27 crc kubenswrapper[4805]: E0216 20:58:27.597340 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:28 crc kubenswrapper[4805]: I0216 20:58:28.597388 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:28 crc kubenswrapper[4805]: I0216 20:58:28.597443 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:28 crc kubenswrapper[4805]: E0216 20:58:28.597705 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:28 crc kubenswrapper[4805]: E0216 20:58:28.597843 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:28 crc kubenswrapper[4805]: I0216 20:58:28.598937 4805 scope.go:117] "RemoveContainer" containerID="344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229" Feb 16 20:58:28 crc kubenswrapper[4805]: E0216 20:58:28.736714 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 20:58:28 crc kubenswrapper[4805]: I0216 20:58:28.827664 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/3.log" Feb 16 20:58:28 crc kubenswrapper[4805]: I0216 20:58:28.830262 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerStarted","Data":"b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1"} Feb 16 20:58:28 crc kubenswrapper[4805]: I0216 20:58:28.831182 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:58:29 crc kubenswrapper[4805]: I0216 20:58:29.415610 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podStartSLOduration=105.415589328 podStartE2EDuration="1m45.415589328s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:28.875283578 +0000 UTC m=+126.693966903" watchObservedRunningTime="2026-02-16 20:58:29.415589328 +0000 UTC m=+127.234272623" Feb 16 20:58:29 crc kubenswrapper[4805]: I0216 20:58:29.416537 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-b6xdh"] Feb 16 20:58:29 crc kubenswrapper[4805]: I0216 20:58:29.416618 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:29 crc kubenswrapper[4805]: E0216 20:58:29.416696 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:29 crc kubenswrapper[4805]: I0216 20:58:29.596808 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:29 crc kubenswrapper[4805]: I0216 20:58:29.596812 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:29 crc kubenswrapper[4805]: E0216 20:58:29.597010 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:29 crc kubenswrapper[4805]: E0216 20:58:29.597071 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:30 crc kubenswrapper[4805]: I0216 20:58:30.597268 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:30 crc kubenswrapper[4805]: E0216 20:58:30.597463 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:31 crc kubenswrapper[4805]: I0216 20:58:31.597030 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:31 crc kubenswrapper[4805]: I0216 20:58:31.597062 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:31 crc kubenswrapper[4805]: E0216 20:58:31.597271 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:31 crc kubenswrapper[4805]: E0216 20:58:31.597683 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:31 crc kubenswrapper[4805]: I0216 20:58:31.598514 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:31 crc kubenswrapper[4805]: E0216 20:58:31.598680 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:32 crc kubenswrapper[4805]: I0216 20:58:32.597092 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:32 crc kubenswrapper[4805]: E0216 20:58:32.597453 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:33 crc kubenswrapper[4805]: I0216 20:58:33.597255 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:33 crc kubenswrapper[4805]: I0216 20:58:33.597261 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:33 crc kubenswrapper[4805]: I0216 20:58:33.597276 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:33 crc kubenswrapper[4805]: E0216 20:58:33.598300 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:33 crc kubenswrapper[4805]: I0216 20:58:33.598391 4805 scope.go:117] "RemoveContainer" containerID="ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36" Feb 16 20:58:33 crc kubenswrapper[4805]: E0216 20:58:33.598488 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:33 crc kubenswrapper[4805]: E0216 20:58:33.598536 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:33 crc kubenswrapper[4805]: E0216 20:58:33.737666 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 20:58:34 crc kubenswrapper[4805]: I0216 20:58:34.597183 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:34 crc kubenswrapper[4805]: E0216 20:58:34.597319 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:34 crc kubenswrapper[4805]: I0216 20:58:34.849582 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qwfz_7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2/kube-multus/1.log" Feb 16 20:58:34 crc kubenswrapper[4805]: I0216 20:58:34.849636 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qwfz" event={"ID":"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2","Type":"ContainerStarted","Data":"525eb5bad3094f13416cbe9634fedc7514417458399df4d37ede4cc0a0909ad2"} Feb 16 20:58:35 crc kubenswrapper[4805]: I0216 20:58:35.597597 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:35 crc kubenswrapper[4805]: I0216 20:58:35.597849 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:35 crc kubenswrapper[4805]: E0216 20:58:35.598061 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:35 crc kubenswrapper[4805]: I0216 20:58:35.598086 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:35 crc kubenswrapper[4805]: E0216 20:58:35.598242 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:35 crc kubenswrapper[4805]: E0216 20:58:35.598467 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:36 crc kubenswrapper[4805]: I0216 20:58:36.596852 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:36 crc kubenswrapper[4805]: E0216 20:58:36.597036 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:37 crc kubenswrapper[4805]: I0216 20:58:37.597357 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:37 crc kubenswrapper[4805]: I0216 20:58:37.597461 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:37 crc kubenswrapper[4805]: E0216 20:58:37.597573 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 20:58:37 crc kubenswrapper[4805]: I0216 20:58:37.597396 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:37 crc kubenswrapper[4805]: E0216 20:58:37.597699 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 20:58:37 crc kubenswrapper[4805]: E0216 20:58:37.598083 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b6xdh" podUID="68747e4a-6576-44c3-b663-250315f6712f" Feb 16 20:58:38 crc kubenswrapper[4805]: I0216 20:58:38.597348 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:38 crc kubenswrapper[4805]: E0216 20:58:38.597590 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 20:58:38 crc kubenswrapper[4805]: I0216 20:58:38.653814 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 20:58:39 crc kubenswrapper[4805]: I0216 20:58:39.597871 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh" Feb 16 20:58:39 crc kubenswrapper[4805]: I0216 20:58:39.597912 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:39 crc kubenswrapper[4805]: I0216 20:58:39.598106 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:39 crc kubenswrapper[4805]: I0216 20:58:39.601834 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 20:58:39 crc kubenswrapper[4805]: I0216 20:58:39.602409 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 20:58:39 crc kubenswrapper[4805]: I0216 20:58:39.603314 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 20:58:39 crc kubenswrapper[4805]: I0216 20:58:39.603356 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 20:58:40 crc kubenswrapper[4805]: I0216 20:58:40.597751 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:40 crc kubenswrapper[4805]: I0216 20:58:40.600547 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 20:58:40 crc kubenswrapper[4805]: I0216 20:58:40.600820 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.166058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.220146 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-prjll"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.221241 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.231477 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.232127 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.232335 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.232539 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.232770 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.232961 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.233160 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.233371 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.233592 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.237906 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-76qvc"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.243256 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.243614 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.246077 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.266287 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.270825 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.271240 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.271437 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.281393 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.281620 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.281774 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.290036 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 
20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.297977 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-h2zb9"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.298375 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.298575 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.298394 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wwm8v"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.299505 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.300139 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.300284 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.300555 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-45jmj"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.300574 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.301368 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87pxp"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.301792 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.302119 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.302504 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-fzjtf"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.302825 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w44f5"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.303102 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.303365 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.303715 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.306221 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.306614 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.307820 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.312889 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.316325 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.316468 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.317877 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318585 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-etcd-serving-ca\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318628 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-config\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318652 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-image-import-ca\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318700 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fc38c573-234c-4867-b170-adabd2bee815-encryption-config\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318743 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5q9r\" (UniqueName: \"kubernetes.io/projected/fc38c573-234c-4867-b170-adabd2bee815-kube-api-access-q5q9r\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318784 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-serving-cert\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318805 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc38c573-234c-4867-b170-adabd2bee815-serving-cert\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " 
pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318826 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-trusted-ca-bundle\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318849 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318872 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fc38c573-234c-4867-b170-adabd2bee815-node-pullsecrets\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318903 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rkls\" (UniqueName: \"kubernetes.io/projected/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-kube-api-access-8rkls\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318926 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/fc38c573-234c-4867-b170-adabd2bee815-audit-dir\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.318971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-service-ca-bundle\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.319281 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-audit\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.319311 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fc38c573-234c-4867-b170-adabd2bee815-etcd-client\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.319335 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-config\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.319647 4805 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.319715 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2t9r2"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.319837 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.320013 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.320260 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2t9r2" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.320466 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.320509 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.320266 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.324482 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.324611 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.324745 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.324866 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.324976 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.325092 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.325196 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.325298 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.325427 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.325537 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 
20:58:43.325792 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.325871 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.325935 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.325938 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.326085 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.326121 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.326204 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.326211 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.334140 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.334443 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.335026 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: 
I0216 20:58:43.335310 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.336027 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.340428 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.342976 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.344259 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.344751 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.348339 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r4r7b"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.358390 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.361447 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.361538 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.361607 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.361695 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.361792 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.362008 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.362563 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.362993 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363170 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363230 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363249 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363308 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363359 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363477 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363495 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363561 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363610 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363670 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363180 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363808 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.364035 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.364114 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.363483 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.364350 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.364370 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.364519 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.364554 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.364596 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" 
Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.364706 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.364865 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.364931 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.365115 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.365177 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.365279 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.365328 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.365418 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.365538 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.365558 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.365575 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.365665 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.367346 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.367569 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.367694 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.367827 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.368001 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.368022 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.368178 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.369561 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.372429 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 20:58:43 
crc kubenswrapper[4805]: I0216 20:58:43.375597 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.375653 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.375870 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.376264 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.382090 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.382341 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.382466 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.383065 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.383065 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.384302 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.385003 4805 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.387286 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-psbvs"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.387840 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.388267 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.388486 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.388832 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.391016 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hkpgz"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.394498 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.397858 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.398273 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.403512 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.415321 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.416645 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.417006 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.418517 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.419491 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.421752 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422169 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422426 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba323e22-bfbd-4b64-99ab-4695831c69a7-metrics-tls\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422463 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f398957-312d-4861-8620-ea7ca65d9bc7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wvx8x\" (UID: \"9f398957-312d-4861-8620-ea7ca65d9bc7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422488 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422516 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-etcd-serving-ca\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422538 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-client-ca\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422555 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2fea5ff9-3c84-499c-aca5-f7af4320a677-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jbblk\" (UID: \"2fea5ff9-3c84-499c-aca5-f7af4320a677\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422695 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-serving-cert\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422734 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba323e22-bfbd-4b64-99ab-4695831c69a7-trusted-ca\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422906 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc45d729-38e4-4964-b5b6-de896f734fe8-serving-cert\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422929 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m2mx\" (UniqueName: \"kubernetes.io/projected/a17754e1-d84d-4025-9398-2ad41d0f8da6-kube-api-access-7m2mx\") pod \"openshift-controller-manager-operator-756b6f6bc6-mvqf6\" (UID: \"a17754e1-d84d-4025-9398-2ad41d0f8da6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422954 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d04bc3d4-3e2d-489d-83ad-77893578d020-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.422980 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37168f5d-63ef-497d-bbbf-06b677a39490-metrics-tls\") pod \"dns-operator-744455d44c-87pxp\" (UID: \"37168f5d-63ef-497d-bbbf-06b677a39490\") " pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423008 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7f5646d3-161a-4168-8793-6b7372b1fc9b-machine-approver-tls\") pod \"machine-approver-56656f9798-cd6vq\" (UID: \"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423034 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d04bc3d4-3e2d-489d-83ad-77893578d020-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423097 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-config\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423119 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-image-import-ca\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423149 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fc38c573-234c-4867-b170-adabd2bee815-encryption-config\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423179 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17754e1-d84d-4025-9398-2ad41d0f8da6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mvqf6\" (UID: \"a17754e1-d84d-4025-9398-2ad41d0f8da6\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423203 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba323e22-bfbd-4b64-99ab-4695831c69a7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423267 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5q9r\" (UniqueName: \"kubernetes.io/projected/fc38c573-234c-4867-b170-adabd2bee815-kube-api-access-q5q9r\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423362 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rw4z\" (UniqueName: \"kubernetes.io/projected/7f5646d3-161a-4168-8793-6b7372b1fc9b-kube-api-access-9rw4z\") pod \"machine-approver-56656f9798-cd6vq\" (UID: \"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-encryption-config\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423648 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-audit-policies\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.423855 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424113 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f398957-312d-4861-8620-ea7ca65d9bc7-config\") pod \"kube-controller-manager-operator-78b949d7b-wvx8x\" (UID: \"9f398957-312d-4861-8620-ea7ca65d9bc7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424186 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-serving-cert\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424253 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc38c573-234c-4867-b170-adabd2bee815-serving-cert\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc 
kubenswrapper[4805]: I0216 20:58:43.424287 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-trusted-ca-bundle\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424324 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59jl6\" (UniqueName: \"kubernetes.io/projected/cc45d729-38e4-4964-b5b6-de896f734fe8-kube-api-access-59jl6\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nrsr\" (UniqueName: \"kubernetes.io/projected/ed3fdbaa-9dfc-4f4f-991e-1710c03738f4-kube-api-access-8nrsr\") pod \"cluster-samples-operator-665b6dd947-xsg2r\" (UID: \"ed3fdbaa-9dfc-4f4f-991e-1710c03738f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424548 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/fc38c573-234c-4867-b170-adabd2bee815-node-pullsecrets\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424582 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a17754e1-d84d-4025-9398-2ad41d0f8da6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mvqf6\" (UID: \"a17754e1-d84d-4025-9398-2ad41d0f8da6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424609 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f398957-312d-4861-8620-ea7ca65d9bc7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wvx8x\" (UID: \"9f398957-312d-4861-8620-ea7ca65d9bc7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424696 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5646d3-161a-4168-8793-6b7372b1fc9b-config\") pod \"machine-approver-56656f9798-cd6vq\" (UID: \"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424751 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxd88\" (UniqueName: \"kubernetes.io/projected/ba323e22-bfbd-4b64-99ab-4695831c69a7-kube-api-access-kxd88\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-config\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424822 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fea5ff9-3c84-499c-aca5-f7af4320a677-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jbblk\" (UID: \"2fea5ff9-3c84-499c-aca5-f7af4320a677\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424859 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d04bc3d4-3e2d-489d-83ad-77893578d020-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424870 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-image-import-ca\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424940 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-ngctt\" (UniqueName: \"kubernetes.io/projected/37168f5d-63ef-497d-bbbf-06b677a39490-kube-api-access-ngctt\") pod \"dns-operator-744455d44c-87pxp\" (UID: \"37168f5d-63ef-497d-bbbf-06b677a39490\") " pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.424992 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-audit-dir\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.425042 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlrh\" (UniqueName: \"kubernetes.io/projected/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-kube-api-access-qmlrh\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.425045 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.425078 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rkls\" (UniqueName: \"kubernetes.io/projected/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-kube-api-access-8rkls\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.425110 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/fc38c573-234c-4867-b170-adabd2bee815-audit-dir\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.426070 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-etcd-serving-ca\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.425955 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fc38c573-234c-4867-b170-adabd2bee815-node-pullsecrets\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.426150 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc38c573-234c-4867-b170-adabd2bee815-audit-dir\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.426229 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-service-ca-bundle\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.427596 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-config\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.428463 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-service-ca-bundle\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.429474 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.438953 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.440038 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-trusted-ca-bundle\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.440650 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.441624 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.442303 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.442563 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.443005 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.443296 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc38c573-234c-4867-b170-adabd2bee815-serving-cert\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.443715 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444034 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed3fdbaa-9dfc-4f4f-991e-1710c03738f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xsg2r\" (UID: \"ed3fdbaa-9dfc-4f4f-991e-1710c03738f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444102 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7f5646d3-161a-4168-8793-6b7372b1fc9b-auth-proxy-config\") pod \"machine-approver-56656f9798-cd6vq\" (UID: \"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444128 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/2fea5ff9-3c84-499c-aca5-f7af4320a677-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jbblk\" (UID: \"2fea5ff9-3c84-499c-aca5-f7af4320a677\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444173 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-etcd-client\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444275 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444293 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444456 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444272 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-audit\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444657 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fc38c573-234c-4867-b170-adabd2bee815-etcd-client\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444694 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnz6r\" (UniqueName: \"kubernetes.io/projected/d04bc3d4-3e2d-489d-83ad-77893578d020-kube-api-access-qnz6r\") pod \"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444747 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-config\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.444895 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fc38c573-234c-4867-b170-adabd2bee815-audit\") pod 
\"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.445108 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fc38c573-234c-4867-b170-adabd2bee815-encryption-config\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.445114 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l4tc4"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.445383 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-serving-cert\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.445903 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.446638 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.447118 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.447791 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fc38c573-234c-4867-b170-adabd2bee815-etcd-client\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.451126 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hkjs5"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.451698 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.454416 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.454755 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-config\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.455227 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.456224 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.457190 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.458151 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-prjll"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.462972 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pp7j5"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.463882 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.463980 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.465823 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.467506 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h2zb9"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.469199 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.469544 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kzwl5"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.470384 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kzwl5" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.470429 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.472105 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.473781 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87pxp"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.474494 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-45jmj"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.476905 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wwm8v"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.478155 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-76qvc"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.479185 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r4r7b"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.480199 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.481367 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.482681 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2t9r2"] Feb 16 
20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.482810 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.484499 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-psbvs"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.486471 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.488528 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.490153 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.491700 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.493112 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.494799 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.501521 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.502813 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w44f5"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.503194 
4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.505006 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.506743 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l4tc4"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.508473 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.509678 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hkpgz"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.510881 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hkjs5"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.512329 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.523427 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.523511 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.525801 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r5h2d"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.527869 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-dns/dns-default-fmmsk"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.528458 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.528744 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.528952 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.530124 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pp7j5"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.531216 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.532500 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.533861 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fmmsk"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.535209 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.536269 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r5h2d"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.537474 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.538649 4805 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-ingress-canary/ingress-canary-jvx2g"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.540584 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jvx2g"] Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.540687 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jvx2g" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.543258 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.545251 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-serving-cert\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.545332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba323e22-bfbd-4b64-99ab-4695831c69a7-trusted-ca\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.545415 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc45d729-38e4-4964-b5b6-de896f734fe8-serving-cert\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.545503 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-7m2mx\" (UniqueName: \"kubernetes.io/projected/a17754e1-d84d-4025-9398-2ad41d0f8da6-kube-api-access-7m2mx\") pod \"openshift-controller-manager-operator-756b6f6bc6-mvqf6\" (UID: \"a17754e1-d84d-4025-9398-2ad41d0f8da6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.545585 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d04bc3d4-3e2d-489d-83ad-77893578d020-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.545659 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37168f5d-63ef-497d-bbbf-06b677a39490-metrics-tls\") pod \"dns-operator-744455d44c-87pxp\" (UID: \"37168f5d-63ef-497d-bbbf-06b677a39490\") " pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.545749 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7f5646d3-161a-4168-8793-6b7372b1fc9b-machine-approver-tls\") pod \"machine-approver-56656f9798-cd6vq\" (UID: \"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.545842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d04bc3d4-3e2d-489d-83ad-77893578d020-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.546040 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rw4z\" (UniqueName: \"kubernetes.io/projected/7f5646d3-161a-4168-8793-6b7372b1fc9b-kube-api-access-9rw4z\") pod \"machine-approver-56656f9798-cd6vq\" (UID: \"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.546159 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-encryption-config\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.546259 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17754e1-d84d-4025-9398-2ad41d0f8da6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mvqf6\" (UID: \"a17754e1-d84d-4025-9398-2ad41d0f8da6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.546355 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba323e22-bfbd-4b64-99ab-4695831c69a7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.546467 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-audit-policies\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.546605 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.546711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f398957-312d-4861-8620-ea7ca65d9bc7-config\") pod \"kube-controller-manager-operator-78b949d7b-wvx8x\" (UID: \"9f398957-312d-4861-8620-ea7ca65d9bc7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.546864 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59jl6\" (UniqueName: \"kubernetes.io/projected/cc45d729-38e4-4964-b5b6-de896f734fe8-kube-api-access-59jl6\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.546967 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nrsr\" (UniqueName: \"kubernetes.io/projected/ed3fdbaa-9dfc-4f4f-991e-1710c03738f4-kube-api-access-8nrsr\") pod \"cluster-samples-operator-665b6dd947-xsg2r\" (UID: \"ed3fdbaa-9dfc-4f4f-991e-1710c03738f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" Feb 16 20:58:43 
crc kubenswrapper[4805]: I0216 20:58:43.547077 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5646d3-161a-4168-8793-6b7372b1fc9b-config\") pod \"machine-approver-56656f9798-cd6vq\" (UID: \"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547170 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a17754e1-d84d-4025-9398-2ad41d0f8da6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mvqf6\" (UID: \"a17754e1-d84d-4025-9398-2ad41d0f8da6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547267 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f398957-312d-4861-8620-ea7ca65d9bc7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wvx8x\" (UID: \"9f398957-312d-4861-8620-ea7ca65d9bc7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547374 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxd88\" (UniqueName: \"kubernetes.io/projected/ba323e22-bfbd-4b64-99ab-4695831c69a7-kube-api-access-kxd88\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547533 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d04bc3d4-3e2d-489d-83ad-77893578d020-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547482 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547551 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-config\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547669 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fea5ff9-3c84-499c-aca5-f7af4320a677-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jbblk\" (UID: \"2fea5ff9-3c84-499c-aca5-f7af4320a677\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547762 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-audit-dir\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547812 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7f5646d3-161a-4168-8793-6b7372b1fc9b-config\") pod \"machine-approver-56656f9798-cd6vq\" (UID: \"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547120 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17754e1-d84d-4025-9398-2ad41d0f8da6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mvqf6\" (UID: \"a17754e1-d84d-4025-9398-2ad41d0f8da6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547901 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-audit-dir\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.547470 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-audit-policies\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.548324 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fea5ff9-3c84-499c-aca5-f7af4320a677-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jbblk\" (UID: \"2fea5ff9-3c84-499c-aca5-f7af4320a677\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.548387 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qmlrh\" (UniqueName: \"kubernetes.io/projected/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-kube-api-access-qmlrh\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.548527 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-serving-cert\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.548777 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d04bc3d4-3e2d-489d-83ad-77893578d020-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.548829 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngctt\" (UniqueName: \"kubernetes.io/projected/37168f5d-63ef-497d-bbbf-06b677a39490-kube-api-access-ngctt\") pod \"dns-operator-744455d44c-87pxp\" (UID: \"37168f5d-63ef-497d-bbbf-06b677a39490\") " pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.549051 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37168f5d-63ef-497d-bbbf-06b677a39490-metrics-tls\") pod \"dns-operator-744455d44c-87pxp\" (UID: \"37168f5d-63ef-497d-bbbf-06b677a39490\") " pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" Feb 16 20:58:43 crc 
kubenswrapper[4805]: I0216 20:58:43.549063 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-encryption-config\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.549382 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-config\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.549710 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc45d729-38e4-4964-b5b6-de896f734fe8-serving-cert\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.550208 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed3fdbaa-9dfc-4f4f-991e-1710c03738f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xsg2r\" (UID: \"ed3fdbaa-9dfc-4f4f-991e-1710c03738f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.550956 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7f5646d3-161a-4168-8793-6b7372b1fc9b-machine-approver-tls\") pod \"machine-approver-56656f9798-cd6vq\" (UID: 
\"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.551357 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d04bc3d4-3e2d-489d-83ad-77893578d020-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.551527 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7f5646d3-161a-4168-8793-6b7372b1fc9b-auth-proxy-config\") pod \"machine-approver-56656f9798-cd6vq\" (UID: \"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.550525 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7f5646d3-161a-4168-8793-6b7372b1fc9b-auth-proxy-config\") pod \"machine-approver-56656f9798-cd6vq\" (UID: \"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.552537 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fea5ff9-3c84-499c-aca5-f7af4320a677-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jbblk\" (UID: \"2fea5ff9-3c84-499c-aca5-f7af4320a677\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.552583 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-etcd-client\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.552658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnz6r\" (UniqueName: \"kubernetes.io/projected/d04bc3d4-3e2d-489d-83ad-77893578d020-kube-api-access-qnz6r\") pod \"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.552747 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba323e22-bfbd-4b64-99ab-4695831c69a7-metrics-tls\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.552787 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f398957-312d-4861-8620-ea7ca65d9bc7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wvx8x\" (UID: \"9f398957-312d-4861-8620-ea7ca65d9bc7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.552821 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.552857 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-client-ca\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.552890 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2fea5ff9-3c84-499c-aca5-f7af4320a677-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jbblk\" (UID: \"2fea5ff9-3c84-499c-aca5-f7af4320a677\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.553335 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.553667 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-client-ca\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.553999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a17754e1-d84d-4025-9398-2ad41d0f8da6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mvqf6\" (UID: \"a17754e1-d84d-4025-9398-2ad41d0f8da6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.554302 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed3fdbaa-9dfc-4f4f-991e-1710c03738f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xsg2r\" (UID: \"ed3fdbaa-9dfc-4f4f-991e-1710c03738f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.555398 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-etcd-client\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.557031 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fea5ff9-3c84-499c-aca5-f7af4320a677-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jbblk\" (UID: \"2fea5ff9-3c84-499c-aca5-f7af4320a677\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.603353 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.623310 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.630901 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f398957-312d-4861-8620-ea7ca65d9bc7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wvx8x\" (UID: \"9f398957-312d-4861-8620-ea7ca65d9bc7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.644085 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.657835 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba323e22-bfbd-4b64-99ab-4695831c69a7-metrics-tls\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.663272 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.668560 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f398957-312d-4861-8620-ea7ca65d9bc7-config\") pod \"kube-controller-manager-operator-78b949d7b-wvx8x\" (UID: \"9f398957-312d-4861-8620-ea7ca65d9bc7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.684314 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 20:58:43 crc 
kubenswrapper[4805]: I0216 20:58:43.713109 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.718378 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba323e22-bfbd-4b64-99ab-4695831c69a7-trusted-ca\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.723347 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.744317 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.764181 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.784288 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.806181 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.824224 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.845256 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.864577 4805 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.885014 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.904319 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.925784 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.944423 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.965309 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 20:58:43 crc kubenswrapper[4805]: I0216 20:58:43.984420 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.014603 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.024149 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.044021 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.065035 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 20:58:44 crc kubenswrapper[4805]: 
I0216 20:58:44.084332 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.105004 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.124533 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.143873 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.164188 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.184700 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.208284 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.224308 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.244099 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.264324 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.284138 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.304342 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.324470 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.343867 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.384136 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.392076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5q9r\" (UniqueName: \"kubernetes.io/projected/fc38c573-234c-4867-b170-adabd2bee815-kube-api-access-q5q9r\") pod \"apiserver-76f77b778f-prjll\" (UID: \"fc38c573-234c-4867-b170-adabd2bee815\") " pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.427252 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.431333 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rkls\" (UniqueName: \"kubernetes.io/projected/53e7b659-cdf1-46eb-8e7c-a3361eef84a0-kube-api-access-8rkls\") pod \"authentication-operator-69f744f599-76qvc\" (UID: \"53e7b659-cdf1-46eb-8e7c-a3361eef84a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.441712 4805 request.go:700] 
Waited for 1.002293812s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0 Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.444939 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.464360 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.471161 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.483952 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.497561 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.504480 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.524398 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.543910 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.565003 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.584604 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.605275 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.628651 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.644451 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.665372 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.684194 4805 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.704654 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.724244 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.740756 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-76qvc"] Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.744218 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 20:58:44 crc kubenswrapper[4805]: W0216 20:58:44.751032 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53e7b659_cdf1_46eb_8e7c_a3361eef84a0.slice/crio-4625047db5dcf9be62e91eb89f35bb9dc145036065a116fd17f05b3d521f2873 WatchSource:0}: Error finding container 4625047db5dcf9be62e91eb89f35bb9dc145036065a116fd17f05b3d521f2873: Status 404 returned error can't find the container with id 4625047db5dcf9be62e91eb89f35bb9dc145036065a116fd17f05b3d521f2873 Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.765179 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.788259 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-prjll"] Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.789311 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 20:58:44 crc 
kubenswrapper[4805]: I0216 20:58:44.803313 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 20:58:44 crc kubenswrapper[4805]: W0216 20:58:44.804368 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc38c573_234c_4867_b170_adabd2bee815.slice/crio-0ba775f7830fce7bfd36f2579388944b89b4c5a3539daf675d6de033c3ac1251 WatchSource:0}: Error finding container 0ba775f7830fce7bfd36f2579388944b89b4c5a3539daf675d6de033c3ac1251: Status 404 returned error can't find the container with id 0ba775f7830fce7bfd36f2579388944b89b4c5a3539daf675d6de033c3ac1251 Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.824009 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.843411 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.863957 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.884167 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.891163 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" event={"ID":"53e7b659-cdf1-46eb-8e7c-a3361eef84a0","Type":"ContainerStarted","Data":"38c84df2be55efc7c3c276f4b7e6787aa72b6756f6a1598c8d089745f3bb26ad"} Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.891223 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" event={"ID":"53e7b659-cdf1-46eb-8e7c-a3361eef84a0","Type":"ContainerStarted","Data":"4625047db5dcf9be62e91eb89f35bb9dc145036065a116fd17f05b3d521f2873"} Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.892536 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-prjll" event={"ID":"fc38c573-234c-4867-b170-adabd2bee815","Type":"ContainerStarted","Data":"0ba775f7830fce7bfd36f2579388944b89b4c5a3539daf675d6de033c3ac1251"} Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.904330 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.923514 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.944258 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.964220 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 20:58:44 crc kubenswrapper[4805]: I0216 20:58:44.983353 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.003925 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.023398 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.043285 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"signing-cabundle" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.064288 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.083601 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.104270 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.126235 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.144444 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.163679 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.183930 4805 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.204873 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.223555 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.244697 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.264496 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.282962 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.304360 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.323796 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.368452 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m2mx\" (UniqueName: \"kubernetes.io/projected/a17754e1-d84d-4025-9398-2ad41d0f8da6-kube-api-access-7m2mx\") pod \"openshift-controller-manager-operator-756b6f6bc6-mvqf6\" (UID: \"a17754e1-d84d-4025-9398-2ad41d0f8da6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.383016 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d04bc3d4-3e2d-489d-83ad-77893578d020-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.402116 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rw4z\" (UniqueName: \"kubernetes.io/projected/7f5646d3-161a-4168-8793-6b7372b1fc9b-kube-api-access-9rw4z\") pod \"machine-approver-56656f9798-cd6vq\" (UID: \"7f5646d3-161a-4168-8793-6b7372b1fc9b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 
20:58:45.423535 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.429403 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba323e22-bfbd-4b64-99ab-4695831c69a7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.442652 4805 request.go:700] Waited for 1.895024538s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.442677 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59jl6\" (UniqueName: \"kubernetes.io/projected/cc45d729-38e4-4964-b5b6-de896f734fe8-kube-api-access-59jl6\") pod \"route-controller-manager-6576b87f9c-gsn4v\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:45 crc kubenswrapper[4805]: W0216 20:58:45.446551 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f5646d3_161a_4168_8793_6b7372b1fc9b.slice/crio-b636550f5323934a7c2e71954a7481370d74878d369a0d99bd4422cde8d2f79a WatchSource:0}: Error finding container b636550f5323934a7c2e71954a7481370d74878d369a0d99bd4422cde8d2f79a: Status 404 returned error can't find the container with id b636550f5323934a7c2e71954a7481370d74878d369a0d99bd4422cde8d2f79a Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.472008 4805 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-8nrsr\" (UniqueName: \"kubernetes.io/projected/ed3fdbaa-9dfc-4f4f-991e-1710c03738f4-kube-api-access-8nrsr\") pod \"cluster-samples-operator-665b6dd947-xsg2r\" (UID: \"ed3fdbaa-9dfc-4f4f-991e-1710c03738f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.483922 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxd88\" (UniqueName: \"kubernetes.io/projected/ba323e22-bfbd-4b64-99ab-4695831c69a7-kube-api-access-kxd88\") pod \"ingress-operator-5b745b69d9-fps5r\" (UID: \"ba323e22-bfbd-4b64-99ab-4695831c69a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.500898 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmlrh\" (UniqueName: \"kubernetes.io/projected/2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d-kube-api-access-qmlrh\") pod \"apiserver-7bbb656c7d-bw2cs\" (UID: \"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.527987 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngctt\" (UniqueName: \"kubernetes.io/projected/37168f5d-63ef-497d-bbbf-06b677a39490-kube-api-access-ngctt\") pod \"dns-operator-744455d44c-87pxp\" (UID: \"37168f5d-63ef-497d-bbbf-06b677a39490\") " pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.549203 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnz6r\" (UniqueName: \"kubernetes.io/projected/d04bc3d4-3e2d-489d-83ad-77893578d020-kube-api-access-qnz6r\") pod \"cluster-image-registry-operator-dc59b4c8b-nh6q4\" (UID: \"d04bc3d4-3e2d-489d-83ad-77893578d020\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.557433 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f398957-312d-4861-8620-ea7ca65d9bc7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wvx8x\" (UID: \"9f398957-312d-4861-8620-ea7ca65d9bc7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.577667 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.587167 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.596208 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2fea5ff9-3c84-499c-aca5-f7af4320a677-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jbblk\" (UID: \"2fea5ff9-3c84-499c-aca5-f7af4320a677\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.620878 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.635586 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.649809 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.678288 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686317 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686359 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97145a00-5917-496b-8eaa-48da22c29d3d-audit-dir\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686382 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-console-config\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686405 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-config\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686428 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-bound-sa-token\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686450 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-certificates\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686476 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686513 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/239ca8a8-a575-4ac6-a6c9-df5f9aed2db6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7xtnt\" (UID: 
\"239ca8a8-a575-4ac6-a6c9-df5f9aed2db6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686572 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686716 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj5j4\" (UniqueName: \"kubernetes.io/projected/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-kube-api-access-xj5j4\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686759 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-service-ca\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.686777 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckqcd\" (UniqueName: \"kubernetes.io/projected/239ca8a8-a575-4ac6-a6c9-df5f9aed2db6-kube-api-access-ckqcd\") pod \"openshift-apiserver-operator-796bbdcf4f-7xtnt\" (UID: \"239ca8a8-a575-4ac6-a6c9-df5f9aed2db6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687210 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-client-ca\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687233 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687249 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z4ff\" (UniqueName: \"kubernetes.io/projected/e81b583e-8f61-44e8-b206-2e7b31ca3626-kube-api-access-4z4ff\") pod \"downloads-7954f5f757-2t9r2\" (UID: \"e81b583e-8f61-44e8-b206-2e7b31ca3626\") " pod="openshift-console/downloads-7954f5f757-2t9r2" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687266 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687282 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687296 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-serving-cert\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687312 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dlsr\" (UniqueName: \"kubernetes.io/projected/97145a00-5917-496b-8eaa-48da22c29d3d-kube-api-access-9dlsr\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687408 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-stats-auth\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-oauth-serving-cert\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687531 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-service-ca-bundle\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9fzl\" (UniqueName: \"kubernetes.io/projected/5f9120dc-89fa-43b6-b757-925e25598369-kube-api-access-c9fzl\") pod \"machine-api-operator-5694c8668f-r4r7b\" (UID: \"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687590 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687621 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-oauth-config\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: 
\"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687759 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687845 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-audit-policies\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687871 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.687942 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688253 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: E0216 20:58:45.688265 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.188253302 +0000 UTC m=+144.006936597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688517 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9120dc-89fa-43b6-b757-925e25598369-config\") pod \"machine-api-operator-5694c8668f-r4r7b\" (UID: \"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688555 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/239ca8a8-a575-4ac6-a6c9-df5f9aed2db6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7xtnt\" (UID: \"239ca8a8-a575-4ac6-a6c9-df5f9aed2db6\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688592 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-trusted-ca\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688613 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-serving-cert\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688636 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dvhz\" (UniqueName: \"kubernetes.io/projected/2530eb64-2099-45e0-9727-ea9987f22ed5-kube-api-access-7dvhz\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688659 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-default-certificate\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688683 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/5f9120dc-89fa-43b6-b757-925e25598369-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r4r7b\" (UID: \"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688770 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688857 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688901 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-tls\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.688987 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.689010 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvgt9\" (UniqueName: \"kubernetes.io/projected/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-kube-api-access-hvgt9\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.689041 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-metrics-certs\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.689093 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5f9120dc-89fa-43b6-b757-925e25598369-images\") pod \"machine-api-operator-5694c8668f-r4r7b\" (UID: \"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.689295 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-trusted-ca-bundle\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.689361 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns5n4\" (UniqueName: 
\"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-kube-api-access-ns5n4\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.783181 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790179 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:45 crc kubenswrapper[4805]: E0216 20:58:45.790386 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.290353897 +0000 UTC m=+144.109037212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790456 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/54234ec6-6906-4706-8baf-839fc054773b-certs\") pod \"machine-config-server-kzwl5\" (UID: \"54234ec6-6906-4706-8baf-839fc054773b\") " pod="openshift-machine-config-operator/machine-config-server-kzwl5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790504 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f43042e-2586-44f6-93f1-0a0284d35381-config-volume\") pod \"dns-default-fmmsk\" (UID: \"0f43042e-2586-44f6-93f1-0a0284d35381\") " pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790548 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4bb5\" (UniqueName: \"kubernetes.io/projected/298ba8df-baa6-4b79-b9dd-078f03b74975-kube-api-access-k4bb5\") pod \"ingress-canary-jvx2g\" (UID: \"298ba8df-baa6-4b79-b9dd-078f03b74975\") " pod="openshift-ingress-canary/ingress-canary-jvx2g" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790677 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5f9120dc-89fa-43b6-b757-925e25598369-images\") pod \"machine-api-operator-5694c8668f-r4r7b\" (UID: 
\"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790748 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790776 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvgt9\" (UniqueName: \"kubernetes.io/projected/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-kube-api-access-hvgt9\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790810 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-metrics-certs\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790847 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fe20d46-1095-4f1e-b29b-bdce644a87b5-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4brg\" (UID: \"7fe20d46-1095-4f1e-b29b-bdce644a87b5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-trusted-ca-bundle\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790914 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-etcd-ca\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790948 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b7jmg\" (UID: \"83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.790985 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a451d6a2-4e84-4838-89be-08a88869a68e-secret-volume\") pod \"collect-profiles-29521245-h42wk\" (UID: \"a451d6a2-4e84-4838-89be-08a88869a68e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791117 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns5n4\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-kube-api-access-ns5n4\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791189 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91dbd059-66ac-4a69-b40d-c444e771f9b1-config\") pod \"service-ca-operator-777779d784-wf9n2\" (UID: \"91dbd059-66ac-4a69-b40d-c444e771f9b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791215 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97145a00-5917-496b-8eaa-48da22c29d3d-audit-dir\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791236 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-console-config\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791265 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/bbf84de5-d8b7-4f52-98ec-76d973dc290c-proxy-tls\") pod \"machine-config-controller-84d6567774-btkhr\" (UID: \"bbf84de5-d8b7-4f52-98ec-76d973dc290c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791290 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-config\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791312 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-images\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791370 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/239ca8a8-a575-4ac6-a6c9-df5f9aed2db6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7xtnt\" (UID: \"239ca8a8-a575-4ac6-a6c9-df5f9aed2db6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791399 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/54234ec6-6906-4706-8baf-839fc054773b-node-bootstrap-token\") pod \"machine-config-server-kzwl5\" (UID: \"54234ec6-6906-4706-8baf-839fc054773b\") " pod="openshift-machine-config-operator/machine-config-server-kzwl5" Feb 16 20:58:45 crc 
kubenswrapper[4805]: I0216 20:58:45.791421 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7ebd63b-db13-4e5d-bc30-aaf3469daab4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvfgn\" (UID: \"a7ebd63b-db13-4e5d-bc30-aaf3469daab4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791443 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/298ba8df-baa6-4b79-b9dd-078f03b74975-cert\") pod \"ingress-canary-jvx2g\" (UID: \"298ba8df-baa6-4b79-b9dd-078f03b74975\") " pod="openshift-ingress-canary/ingress-canary-jvx2g" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791483 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791505 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj5j4\" (UniqueName: \"kubernetes.io/projected/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-kube-api-access-xj5j4\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791529 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-service-ca\") pod \"console-f9d7485db-h2zb9\" 
(UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791552 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hkjs5\" (UID: \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791587 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckqcd\" (UniqueName: \"kubernetes.io/projected/239ca8a8-a575-4ac6-a6c9-df5f9aed2db6-kube-api-access-ckqcd\") pod \"openshift-apiserver-operator-796bbdcf4f-7xtnt\" (UID: \"239ca8a8-a575-4ac6-a6c9-df5f9aed2db6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791611 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-etcd-service-ca\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791669 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l65v9\" (UniqueName: \"kubernetes.io/projected/18bc5c62-7469-4926-a3e2-fe9eb48844c8-kube-api-access-l65v9\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791694 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf703159-340e-4a50-a7fa-4eb5402fabbf-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l4tc4\" (UID: \"cf703159-340e-4a50-a7fa-4eb5402fabbf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791714 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66159345-0259-49a9-a234-ce7520f5b6c6-serving-cert\") pod \"openshift-config-operator-7777fb866f-t4bkt\" (UID: \"66159345-0259-49a9-a234-ce7520f5b6c6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791782 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-etcd-client\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791805 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd240135-deca-4bb9-907c-0fb3995a76a5-serving-cert\") pod \"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791829 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7rsf\" (UniqueName: \"kubernetes.io/projected/a451d6a2-4e84-4838-89be-08a88869a68e-kube-api-access-m7rsf\") pod \"collect-profiles-29521245-h42wk\" (UID: 
\"a451d6a2-4e84-4838-89be-08a88869a68e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791867 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791890 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-serving-cert\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791914 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/66159345-0259-49a9-a234-ce7520f5b6c6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-t4bkt\" (UID: \"66159345-0259-49a9-a234-ce7520f5b6c6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791939 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dlsr\" (UniqueName: \"kubernetes.io/projected/97145a00-5917-496b-8eaa-48da22c29d3d-kube-api-access-9dlsr\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791960 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-stats-auth\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.791984 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/61c85f40-93cf-46d4-8a43-751ed991de0c-tmpfs\") pod \"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792011 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6c9v\" (UniqueName: \"kubernetes.io/projected/54234ec6-6906-4706-8baf-839fc054773b-kube-api-access-x6c9v\") pod \"machine-config-server-kzwl5\" (UID: \"54234ec6-6906-4706-8baf-839fc054773b\") " pod="openshift-machine-config-operator/machine-config-server-kzwl5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792033 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hb5d\" (UniqueName: \"kubernetes.io/projected/5075d111-78c5-40b6-8b8e-1e5ce57d943b-kube-api-access-7hb5d\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zzrz\" (UID: \"5075d111-78c5-40b6-8b8e-1e5ce57d943b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792055 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91dbd059-66ac-4a69-b40d-c444e771f9b1-serving-cert\") pod \"service-ca-operator-777779d784-wf9n2\" (UID: \"91dbd059-66ac-4a69-b40d-c444e771f9b1\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792682 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-oauth-config\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792733 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-serving-cert\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792765 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61c85f40-93cf-46d4-8a43-751ed991de0c-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792818 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792841 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-socket-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792863 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc37b9d5-4b67-4acc-bd4e-2f47257edff7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hzk8v\" (UID: \"fc37b9d5-4b67-4acc-bd4e-2f47257edff7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792881 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61c85f40-93cf-46d4-8a43-751ed991de0c-webhook-cert\") pod \"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792915 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-audit-policies\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792939 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc 
kubenswrapper[4805]: I0216 20:58:45.792961 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-config\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.792985 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-proxy-tls\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793029 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793053 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28hdv\" (UniqueName: \"kubernetes.io/projected/66159345-0259-49a9-a234-ce7520f5b6c6-kube-api-access-28hdv\") pod \"openshift-config-operator-7777fb866f-t4bkt\" (UID: \"66159345-0259-49a9-a234-ce7520f5b6c6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793080 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9120dc-89fa-43b6-b757-925e25598369-config\") pod 
\"machine-api-operator-5694c8668f-r4r7b\" (UID: \"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793337 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/239ca8a8-a575-4ac6-a6c9-df5f9aed2db6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7xtnt\" (UID: \"239ca8a8-a575-4ac6-a6c9-df5f9aed2db6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793372 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/03699c15-f204-4a9b-bf39-359c8734495d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p2r8s\" (UID: \"03699c15-f204-4a9b-bf39-359c8734495d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793405 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5075d111-78c5-40b6-8b8e-1e5ce57d943b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zzrz\" (UID: \"5075d111-78c5-40b6-8b8e-1e5ce57d943b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793446 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-default-certificate\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc 
kubenswrapper[4805]: I0216 20:58:45.793489 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-serving-cert\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f43042e-2586-44f6-93f1-0a0284d35381-metrics-tls\") pod \"dns-default-fmmsk\" (UID: \"0f43042e-2586-44f6-93f1-0a0284d35381\") " pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793602 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f9120dc-89fa-43b6-b757-925e25598369-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r4r7b\" (UID: \"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793629 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd240135-deca-4bb9-907c-0fb3995a76a5-trusted-ca\") pod \"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793671 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: 
\"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.793949 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-audit-policies\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.794950 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.796068 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97145a00-5917-496b-8eaa-48da22c29d3d-audit-dir\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.797067 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-console-config\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.797316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-service-ca\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.797403 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-config\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.798551 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.798617 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5f9120dc-89fa-43b6-b757-925e25598369-images\") pod \"machine-api-operator-5694c8668f-r4r7b\" (UID: \"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:45 crc kubenswrapper[4805]: E0216 20:58:45.798794 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.298774415 +0000 UTC m=+144.117457710 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.799060 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/239ca8a8-a575-4ac6-a6c9-df5f9aed2db6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7xtnt\" (UID: \"239ca8a8-a575-4ac6-a6c9-df5f9aed2db6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.799156 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-trusted-ca-bundle\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.799297 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.799880 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9120dc-89fa-43b6-b757-925e25598369-config\") pod \"machine-api-operator-5694c8668f-r4r7b\" (UID: 
\"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.800325 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-tls\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.800386 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.800416 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbf84de5-d8b7-4f52-98ec-76d973dc290c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-btkhr\" (UID: \"bbf84de5-d8b7-4f52-98ec-76d973dc290c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.806029 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f7xj\" (UniqueName: \"kubernetes.io/projected/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-kube-api-access-7f7xj\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.806133 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-26478\" (UniqueName: \"kubernetes.io/projected/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-kube-api-access-26478\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.805965 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.806267 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-csi-data-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.806285 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7jqh\" (UniqueName: \"kubernetes.io/projected/a9ac0f09-69ad-444c-b827-cbb26c8623fb-kube-api-access-m7jqh\") pod \"marketplace-operator-79b997595-hkjs5\" (UID: \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.806653 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrgpd\" (UniqueName: \"kubernetes.io/projected/91dbd059-66ac-4a69-b40d-c444e771f9b1-kube-api-access-vrgpd\") pod \"service-ca-operator-777779d784-wf9n2\" (UID: \"91dbd059-66ac-4a69-b40d-c444e771f9b1\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.806686 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.806730 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hkjs5\" (UID: \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.806795 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-mountpoint-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.807079 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr8bj\" (UniqueName: \"kubernetes.io/projected/61c85f40-93cf-46d4-8a43-751ed991de0c-kube-api-access-jr8bj\") pod \"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.807122 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-bound-sa-token\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.807142 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ebd63b-db13-4e5d-bc30-aaf3469daab4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvfgn\" (UID: \"a7ebd63b-db13-4e5d-bc30-aaf3469daab4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808043 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-certificates\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808176 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808250 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd240135-deca-4bb9-907c-0fb3995a76a5-config\") pod \"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " 
pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808373 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tnjs\" (UniqueName: \"kubernetes.io/projected/bbf84de5-d8b7-4f52-98ec-76d973dc290c-kube-api-access-7tnjs\") pod \"machine-config-controller-84d6567774-btkhr\" (UID: \"bbf84de5-d8b7-4f52-98ec-76d973dc290c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808406 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j7rx\" (UniqueName: \"kubernetes.io/projected/ebb1f28c-06bf-4127-beab-4339bcc3c559-kube-api-access-2j7rx\") pod \"service-ca-9c57cc56f-pp7j5\" (UID: \"ebb1f28c-06bf-4127-beab-4339bcc3c559\") " pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-plugins-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808538 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knmpl\" (UniqueName: \"kubernetes.io/projected/03699c15-f204-4a9b-bf39-359c8734495d-kube-api-access-knmpl\") pod \"olm-operator-6b444d44fb-p2r8s\" (UID: \"03699c15-f204-4a9b-bf39-359c8734495d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808570 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wr9gl\" (UniqueName: \"kubernetes.io/projected/0f43042e-2586-44f6-93f1-0a0284d35381-kube-api-access-wr9gl\") pod \"dns-default-fmmsk\" (UID: \"0f43042e-2586-44f6-93f1-0a0284d35381\") " pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fe20d46-1095-4f1e-b29b-bdce644a87b5-srv-cert\") pod \"catalog-operator-68c6474976-r4brg\" (UID: \"7fe20d46-1095-4f1e-b29b-bdce644a87b5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808712 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-client-ca\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808758 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/03699c15-f204-4a9b-bf39-359c8734495d-srv-cert\") pod \"olm-operator-6b444d44fb-p2r8s\" (UID: \"03699c15-f204-4a9b-bf39-359c8734495d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808774 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8bzs\" (UniqueName: \"kubernetes.io/projected/fd240135-deca-4bb9-907c-0fb3995a76a5-kube-api-access-f8bzs\") pod \"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 
20:58:45.808789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc37b9d5-4b67-4acc-bd4e-2f47257edff7-config\") pod \"kube-apiserver-operator-766d6c64bb-hzk8v\" (UID: \"fc37b9d5-4b67-4acc-bd4e-2f47257edff7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808847 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z4ff\" (UniqueName: \"kubernetes.io/projected/e81b583e-8f61-44e8-b206-2e7b31ca3626-kube-api-access-4z4ff\") pod \"downloads-7954f5f757-2t9r2\" (UID: \"e81b583e-8f61-44e8-b206-2e7b31ca3626\") " pod="openshift-console/downloads-7954f5f757-2t9r2" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808922 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808940 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a451d6a2-4e84-4838-89be-08a88869a68e-config-volume\") pod \"collect-profiles-29521245-h42wk\" (UID: 
\"a451d6a2-4e84-4838-89be-08a88869a68e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.808984 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-oauth-serving-cert\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809004 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ebb1f28c-06bf-4127-beab-4339bcc3c559-signing-key\") pod \"service-ca-9c57cc56f-pp7j5\" (UID: \"ebb1f28c-06bf-4127-beab-4339bcc3c559\") " pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809066 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-service-ca-bundle\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809084 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ebb1f28c-06bf-4127-beab-4339bcc3c559-signing-cabundle\") pod \"service-ca-9c57cc56f-pp7j5\" (UID: \"ebb1f28c-06bf-4127-beab-4339bcc3c559\") " pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809101 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvhg\" (UniqueName: 
\"kubernetes.io/projected/7fe20d46-1095-4f1e-b29b-bdce644a87b5-kube-api-access-9nvhg\") pod \"catalog-operator-68c6474976-r4brg\" (UID: \"7fe20d46-1095-4f1e-b29b-bdce644a87b5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809145 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9fzl\" (UniqueName: \"kubernetes.io/projected/5f9120dc-89fa-43b6-b757-925e25598369-kube-api-access-c9fzl\") pod \"machine-api-operator-5694c8668f-r4r7b\" (UID: \"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809169 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809187 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809236 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-registration-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 
20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809262 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxg5v\" (UniqueName: \"kubernetes.io/projected/e72fcc0c-766d-4396-a6a5-d69926db1197-kube-api-access-vxg5v\") pod \"migrator-59844c95c7-22q5q\" (UID: \"e72fcc0c-766d-4396-a6a5-d69926db1197\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809338 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n56s\" (UniqueName: \"kubernetes.io/projected/83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3-kube-api-access-7n56s\") pod \"package-server-manager-789f6589d5-b7jmg\" (UID: \"83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809367 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-trusted-ca\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809385 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dvhz\" (UniqueName: 
\"kubernetes.io/projected/2530eb64-2099-45e0-9727-ea9987f22ed5-kube-api-access-7dvhz\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809401 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc37b9d5-4b67-4acc-bd4e-2f47257edff7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hzk8v\" (UID: \"fc37b9d5-4b67-4acc-bd4e-2f47257edff7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809417 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r48j\" (UniqueName: \"kubernetes.io/projected/cf703159-340e-4a50-a7fa-4eb5402fabbf-kube-api-access-8r48j\") pod \"multus-admission-controller-857f4d67dd-l4tc4\" (UID: \"cf703159-340e-4a50-a7fa-4eb5402fabbf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809491 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-client-ca\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809707 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 
20:58:45.809968 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zhvq\" (UniqueName: \"kubernetes.io/projected/a7ebd63b-db13-4e5d-bc30-aaf3469daab4-kube-api-access-9zhvq\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvfgn\" (UID: \"a7ebd63b-db13-4e5d-bc30-aaf3469daab4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.811923 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-trusted-ca\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.809096 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-certificates\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.812192 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-oauth-serving-cert\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.812320 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: 
\"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.813308 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-service-ca-bundle\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.821574 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.821616 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-metrics-certs\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.822045 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-serving-cert\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.823423 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-serving-cert\") pod 
\"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.825885 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.825967 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/239ca8a8-a575-4ac6-a6c9-df5f9aed2db6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7xtnt\" (UID: \"239ca8a8-a575-4ac6-a6c9-df5f9aed2db6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.826050 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.826468 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.826712 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-oauth-config\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.826714 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f9120dc-89fa-43b6-b757-925e25598369-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r4r7b\" (UID: \"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.826934 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-tls\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.827113 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.828142 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-default-certificate\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.828652 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-stats-auth\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.828827 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.829680 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.836133 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj5j4\" (UniqueName: \"kubernetes.io/projected/9ebe9ce6-6b40-435b-a14f-85a80c4ce52a-kube-api-access-xj5j4\") pod \"router-default-5444994796-fzjtf\" (UID: \"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a\") " pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.838172 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.838347 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.839009 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.841887 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns5n4\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-kube-api-access-ns5n4\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.858405 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvgt9\" (UniqueName: \"kubernetes.io/projected/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-kube-api-access-hvgt9\") pod \"controller-manager-879f6c89f-45jmj\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.896325 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckqcd\" (UniqueName: \"kubernetes.io/projected/239ca8a8-a575-4ac6-a6c9-df5f9aed2db6-kube-api-access-ckqcd\") pod \"openshift-apiserver-operator-796bbdcf4f-7xtnt\" (UID: \"239ca8a8-a575-4ac6-a6c9-df5f9aed2db6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.904227 4805 generic.go:334] "Generic (PLEG): container finished" podID="fc38c573-234c-4867-b170-adabd2bee815" containerID="50f42e56ebae50bae3de7365adc622b3ac9394d74721103d6367980b107e9cb7" exitCode=0 Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.904301 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-prjll" event={"ID":"fc38c573-234c-4867-b170-adabd2bee815","Type":"ContainerDied","Data":"50f42e56ebae50bae3de7365adc622b3ac9394d74721103d6367980b107e9cb7"} Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.907640 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911417 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911534 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knmpl\" (UniqueName: \"kubernetes.io/projected/03699c15-f204-4a9b-bf39-359c8734495d-kube-api-access-knmpl\") pod \"olm-operator-6b444d44fb-p2r8s\" (UID: \"03699c15-f204-4a9b-bf39-359c8734495d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911555 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr9gl\" (UniqueName: \"kubernetes.io/projected/0f43042e-2586-44f6-93f1-0a0284d35381-kube-api-access-wr9gl\") pod \"dns-default-fmmsk\" (UID: \"0f43042e-2586-44f6-93f1-0a0284d35381\") " pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911570 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/03699c15-f204-4a9b-bf39-359c8734495d-srv-cert\") pod \"olm-operator-6b444d44fb-p2r8s\" (UID: \"03699c15-f204-4a9b-bf39-359c8734495d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911587 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8bzs\" (UniqueName: \"kubernetes.io/projected/fd240135-deca-4bb9-907c-0fb3995a76a5-kube-api-access-f8bzs\") pod 
\"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911602 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc37b9d5-4b67-4acc-bd4e-2f47257edff7-config\") pod \"kube-apiserver-operator-766d6c64bb-hzk8v\" (UID: \"fc37b9d5-4b67-4acc-bd4e-2f47257edff7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911617 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fe20d46-1095-4f1e-b29b-bdce644a87b5-srv-cert\") pod \"catalog-operator-68c6474976-r4brg\" (UID: \"7fe20d46-1095-4f1e-b29b-bdce644a87b5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911643 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a451d6a2-4e84-4838-89be-08a88869a68e-config-volume\") pod \"collect-profiles-29521245-h42wk\" (UID: \"a451d6a2-4e84-4838-89be-08a88869a68e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911660 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ebb1f28c-06bf-4127-beab-4339bcc3c559-signing-key\") pod \"service-ca-9c57cc56f-pp7j5\" (UID: \"ebb1f28c-06bf-4127-beab-4339bcc3c559\") " pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvhg\" (UniqueName: 
\"kubernetes.io/projected/7fe20d46-1095-4f1e-b29b-bdce644a87b5-kube-api-access-9nvhg\") pod \"catalog-operator-68c6474976-r4brg\" (UID: \"7fe20d46-1095-4f1e-b29b-bdce644a87b5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911689 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ebb1f28c-06bf-4127-beab-4339bcc3c559-signing-cabundle\") pod \"service-ca-9c57cc56f-pp7j5\" (UID: \"ebb1f28c-06bf-4127-beab-4339bcc3c559\") " pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911821 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxg5v\" (UniqueName: \"kubernetes.io/projected/e72fcc0c-766d-4396-a6a5-d69926db1197-kube-api-access-vxg5v\") pod \"migrator-59844c95c7-22q5q\" (UID: \"e72fcc0c-766d-4396-a6a5-d69926db1197\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911843 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-registration-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911868 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n56s\" (UniqueName: \"kubernetes.io/projected/83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3-kube-api-access-7n56s\") pod \"package-server-manager-789f6589d5-b7jmg\" (UID: \"83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911891 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc37b9d5-4b67-4acc-bd4e-2f47257edff7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hzk8v\" (UID: \"fc37b9d5-4b67-4acc-bd4e-2f47257edff7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911906 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r48j\" (UniqueName: \"kubernetes.io/projected/cf703159-340e-4a50-a7fa-4eb5402fabbf-kube-api-access-8r48j\") pod \"multus-admission-controller-857f4d67dd-l4tc4\" (UID: \"cf703159-340e-4a50-a7fa-4eb5402fabbf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zhvq\" (UniqueName: \"kubernetes.io/projected/a7ebd63b-db13-4e5d-bc30-aaf3469daab4-kube-api-access-9zhvq\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvfgn\" (UID: \"a7ebd63b-db13-4e5d-bc30-aaf3469daab4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.911947 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4bb5\" (UniqueName: \"kubernetes.io/projected/298ba8df-baa6-4b79-b9dd-078f03b74975-kube-api-access-k4bb5\") pod \"ingress-canary-jvx2g\" (UID: \"298ba8df-baa6-4b79-b9dd-078f03b74975\") " pod="openshift-ingress-canary/ingress-canary-jvx2g" Feb 16 20:58:45 crc kubenswrapper[4805]: E0216 20:58:45.911995 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:46.411974171 +0000 UTC m=+144.230657456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912040 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/54234ec6-6906-4706-8baf-839fc054773b-certs\") pod \"machine-config-server-kzwl5\" (UID: \"54234ec6-6906-4706-8baf-839fc054773b\") " pod="openshift-machine-config-operator/machine-config-server-kzwl5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912061 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f43042e-2586-44f6-93f1-0a0284d35381-config-volume\") pod \"dns-default-fmmsk\" (UID: \"0f43042e-2586-44f6-93f1-0a0284d35381\") " pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-etcd-ca\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912107 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b7jmg\" (UID: \"83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912125 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a451d6a2-4e84-4838-89be-08a88869a68e-secret-volume\") pod \"collect-profiles-29521245-h42wk\" (UID: \"a451d6a2-4e84-4838-89be-08a88869a68e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912149 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fe20d46-1095-4f1e-b29b-bdce644a87b5-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4brg\" (UID: \"7fe20d46-1095-4f1e-b29b-bdce644a87b5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912173 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912197 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91dbd059-66ac-4a69-b40d-c444e771f9b1-config\") pod \"service-ca-operator-777779d784-wf9n2\" (UID: \"91dbd059-66ac-4a69-b40d-c444e771f9b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" Feb 16 
20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912216 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbf84de5-d8b7-4f52-98ec-76d973dc290c-proxy-tls\") pod \"machine-config-controller-84d6567774-btkhr\" (UID: \"bbf84de5-d8b7-4f52-98ec-76d973dc290c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912235 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-images\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912253 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/54234ec6-6906-4706-8baf-839fc054773b-node-bootstrap-token\") pod \"machine-config-server-kzwl5\" (UID: \"54234ec6-6906-4706-8baf-839fc054773b\") " pod="openshift-machine-config-operator/machine-config-server-kzwl5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7ebd63b-db13-4e5d-bc30-aaf3469daab4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvfgn\" (UID: \"a7ebd63b-db13-4e5d-bc30-aaf3469daab4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912282 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/298ba8df-baa6-4b79-b9dd-078f03b74975-cert\") pod \"ingress-canary-jvx2g\" 
(UID: \"298ba8df-baa6-4b79-b9dd-078f03b74975\") " pod="openshift-ingress-canary/ingress-canary-jvx2g" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912307 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hkjs5\" (UID: \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912328 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-etcd-service-ca\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912369 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l65v9\" (UniqueName: \"kubernetes.io/projected/18bc5c62-7469-4926-a3e2-fe9eb48844c8-kube-api-access-l65v9\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912385 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66159345-0259-49a9-a234-ce7520f5b6c6-serving-cert\") pod \"openshift-config-operator-7777fb866f-t4bkt\" (UID: \"66159345-0259-49a9-a234-ce7520f5b6c6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912402 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/cf703159-340e-4a50-a7fa-4eb5402fabbf-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l4tc4\" (UID: \"cf703159-340e-4a50-a7fa-4eb5402fabbf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912417 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7rsf\" (UniqueName: \"kubernetes.io/projected/a451d6a2-4e84-4838-89be-08a88869a68e-kube-api-access-m7rsf\") pod \"collect-profiles-29521245-h42wk\" (UID: \"a451d6a2-4e84-4838-89be-08a88869a68e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912432 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-etcd-client\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912449 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd240135-deca-4bb9-907c-0fb3995a76a5-serving-cert\") pod \"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912471 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/66159345-0259-49a9-a234-ce7520f5b6c6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-t4bkt\" (UID: \"66159345-0259-49a9-a234-ce7520f5b6c6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 
20:58:45.912497 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/61c85f40-93cf-46d4-8a43-751ed991de0c-tmpfs\") pod \"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912515 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hb5d\" (UniqueName: \"kubernetes.io/projected/5075d111-78c5-40b6-8b8e-1e5ce57d943b-kube-api-access-7hb5d\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zzrz\" (UID: \"5075d111-78c5-40b6-8b8e-1e5ce57d943b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912530 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91dbd059-66ac-4a69-b40d-c444e771f9b1-serving-cert\") pod \"service-ca-operator-777779d784-wf9n2\" (UID: \"91dbd059-66ac-4a69-b40d-c444e771f9b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912546 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6c9v\" (UniqueName: \"kubernetes.io/projected/54234ec6-6906-4706-8baf-839fc054773b-kube-api-access-x6c9v\") pod \"machine-config-server-kzwl5\" (UID: \"54234ec6-6906-4706-8baf-839fc054773b\") " pod="openshift-machine-config-operator/machine-config-server-kzwl5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912559 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-serving-cert\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912572 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61c85f40-93cf-46d4-8a43-751ed991de0c-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912582 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc37b9d5-4b67-4acc-bd4e-2f47257edff7-config\") pod \"kube-apiserver-operator-766d6c64bb-hzk8v\" (UID: \"fc37b9d5-4b67-4acc-bd4e-2f47257edff7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912588 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-socket-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912641 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc37b9d5-4b67-4acc-bd4e-2f47257edff7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hzk8v\" (UID: \"fc37b9d5-4b67-4acc-bd4e-2f47257edff7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912660 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61c85f40-93cf-46d4-8a43-751ed991de0c-webhook-cert\") pod 
\"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912689 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-config\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912710 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-proxy-tls\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912774 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912795 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28hdv\" (UniqueName: \"kubernetes.io/projected/66159345-0259-49a9-a234-ce7520f5b6c6-kube-api-access-28hdv\") pod \"openshift-config-operator-7777fb866f-t4bkt\" (UID: \"66159345-0259-49a9-a234-ce7520f5b6c6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-socket-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912817 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/03699c15-f204-4a9b-bf39-359c8734495d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p2r8s\" (UID: \"03699c15-f204-4a9b-bf39-359c8734495d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912846 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5075d111-78c5-40b6-8b8e-1e5ce57d943b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zzrz\" (UID: \"5075d111-78c5-40b6-8b8e-1e5ce57d943b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912868 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f43042e-2586-44f6-93f1-0a0284d35381-metrics-tls\") pod \"dns-default-fmmsk\" (UID: \"0f43042e-2586-44f6-93f1-0a0284d35381\") " pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912890 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd240135-deca-4bb9-907c-0fb3995a76a5-trusted-ca\") pod \"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 
20:58:45.912919 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbf84de5-d8b7-4f52-98ec-76d973dc290c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-btkhr\" (UID: \"bbf84de5-d8b7-4f52-98ec-76d973dc290c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912943 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f7xj\" (UniqueName: \"kubernetes.io/projected/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-kube-api-access-7f7xj\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912967 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26478\" (UniqueName: \"kubernetes.io/projected/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-kube-api-access-26478\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.912990 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-csi-data-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.913011 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7jqh\" (UniqueName: \"kubernetes.io/projected/a9ac0f09-69ad-444c-b827-cbb26c8623fb-kube-api-access-m7jqh\") pod \"marketplace-operator-79b997595-hkjs5\" (UID: 
\"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.913036 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrgpd\" (UniqueName: \"kubernetes.io/projected/91dbd059-66ac-4a69-b40d-c444e771f9b1-kube-api-access-vrgpd\") pod \"service-ca-operator-777779d784-wf9n2\" (UID: \"91dbd059-66ac-4a69-b40d-c444e771f9b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.913058 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hkjs5\" (UID: \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.913079 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-mountpoint-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.913102 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr8bj\" (UniqueName: \"kubernetes.io/projected/61c85f40-93cf-46d4-8a43-751ed991de0c-kube-api-access-jr8bj\") pod \"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.913125 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a7ebd63b-db13-4e5d-bc30-aaf3469daab4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvfgn\" (UID: \"a7ebd63b-db13-4e5d-bc30-aaf3469daab4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.913154 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd240135-deca-4bb9-907c-0fb3995a76a5-config\") pod \"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.913180 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tnjs\" (UniqueName: \"kubernetes.io/projected/bbf84de5-d8b7-4f52-98ec-76d973dc290c-kube-api-access-7tnjs\") pod \"machine-config-controller-84d6567774-btkhr\" (UID: \"bbf84de5-d8b7-4f52-98ec-76d973dc290c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.913203 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j7rx\" (UniqueName: \"kubernetes.io/projected/ebb1f28c-06bf-4127-beab-4339bcc3c559-kube-api-access-2j7rx\") pod \"service-ca-9c57cc56f-pp7j5\" (UID: \"ebb1f28c-06bf-4127-beab-4339bcc3c559\") " pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.913226 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-plugins-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 
20:58:45.913320 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-plugins-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.913378 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-registration-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.914194 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hkjs5\" (UID: \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.914434 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a451d6a2-4e84-4838-89be-08a88869a68e-config-volume\") pod \"collect-profiles-29521245-h42wk\" (UID: \"a451d6a2-4e84-4838-89be-08a88869a68e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.915796 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/66159345-0259-49a9-a234-ce7520f5b6c6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-t4bkt\" (UID: \"66159345-0259-49a9-a234-ce7520f5b6c6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" 
Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.916181 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/61c85f40-93cf-46d4-8a43-751ed991de0c-tmpfs\") pod \"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.916144 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f43042e-2586-44f6-93f1-0a0284d35381-config-volume\") pod \"dns-default-fmmsk\" (UID: \"0f43042e-2586-44f6-93f1-0a0284d35381\") " pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.916832 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-csi-data-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.917144 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-etcd-ca\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.917576 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-etcd-service-ca\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.917821 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" event={"ID":"7f5646d3-161a-4168-8793-6b7372b1fc9b","Type":"ContainerStarted","Data":"5982bc30c69befef54ab9b8d3fe2b22b7b94defafc7efc01148a7c604bef6c81"} Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.917883 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" event={"ID":"7f5646d3-161a-4168-8793-6b7372b1fc9b","Type":"ContainerStarted","Data":"b636550f5323934a7c2e71954a7481370d74878d369a0d99bd4422cde8d2f79a"} Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.918207 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dlsr\" (UniqueName: \"kubernetes.io/projected/97145a00-5917-496b-8eaa-48da22c29d3d-kube-api-access-9dlsr\") pod \"oauth-openshift-558db77b4-wwm8v\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") " pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.918530 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-images\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.918662 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ebd63b-db13-4e5d-bc30-aaf3469daab4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvfgn\" (UID: \"a7ebd63b-db13-4e5d-bc30-aaf3469daab4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.918732 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/18bc5c62-7469-4926-a3e2-fe9eb48844c8-mountpoint-dir\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.918769 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ebb1f28c-06bf-4127-beab-4339bcc3c559-signing-cabundle\") pod \"service-ca-9c57cc56f-pp7j5\" (UID: \"ebb1f28c-06bf-4127-beab-4339bcc3c559\") " pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.919237 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd240135-deca-4bb9-907c-0fb3995a76a5-config\") pod \"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.920011 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-config\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.920359 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:45 crc kubenswrapper[4805]: E0216 20:58:45.920391 4805 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.420364629 +0000 UTC m=+144.239048024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.921488 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbf84de5-d8b7-4f52-98ec-76d973dc290c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-btkhr\" (UID: \"bbf84de5-d8b7-4f52-98ec-76d973dc290c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.921762 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91dbd059-66ac-4a69-b40d-c444e771f9b1-config\") pod \"service-ca-operator-777779d784-wf9n2\" (UID: \"91dbd059-66ac-4a69-b40d-c444e771f9b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.922808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd240135-deca-4bb9-907c-0fb3995a76a5-trusted-ca\") pod \"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:45 crc kubenswrapper[4805]: 
I0216 20:58:45.929428 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.935535 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd240135-deca-4bb9-907c-0fb3995a76a5-serving-cert\") pod \"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.936686 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/298ba8df-baa6-4b79-b9dd-078f03b74975-cert\") pod \"ingress-canary-jvx2g\" (UID: \"298ba8df-baa6-4b79-b9dd-078f03b74975\") " pod="openshift-ingress-canary/ingress-canary-jvx2g" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.937236 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91dbd059-66ac-4a69-b40d-c444e771f9b1-serving-cert\") pod \"service-ca-operator-777779d784-wf9n2\" (UID: \"91dbd059-66ac-4a69-b40d-c444e771f9b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.937415 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/03699c15-f204-4a9b-bf39-359c8734495d-srv-cert\") pod \"olm-operator-6b444d44fb-p2r8s\" (UID: \"03699c15-f204-4a9b-bf39-359c8734495d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.937653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-serving-cert\") pod 
\"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.938102 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fe20d46-1095-4f1e-b29b-bdce644a87b5-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4brg\" (UID: \"7fe20d46-1095-4f1e-b29b-bdce644a87b5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.938324 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5075d111-78c5-40b6-8b8e-1e5ce57d943b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zzrz\" (UID: \"5075d111-78c5-40b6-8b8e-1e5ce57d943b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.938384 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61c85f40-93cf-46d4-8a43-751ed991de0c-webhook-cert\") pod \"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.938469 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fe20d46-1095-4f1e-b29b-bdce644a87b5-srv-cert\") pod \"catalog-operator-68c6474976-r4brg\" (UID: \"7fe20d46-1095-4f1e-b29b-bdce644a87b5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.938879 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hkjs5\" (UID: \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.938911 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc37b9d5-4b67-4acc-bd4e-2f47257edff7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hzk8v\" (UID: \"fc37b9d5-4b67-4acc-bd4e-2f47257edff7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.939232 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7ebd63b-db13-4e5d-bc30-aaf3469daab4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvfgn\" (UID: \"a7ebd63b-db13-4e5d-bc30-aaf3469daab4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.939278 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-proxy-tls\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.939632 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f43042e-2586-44f6-93f1-0a0284d35381-metrics-tls\") pod \"dns-default-fmmsk\" (UID: \"0f43042e-2586-44f6-93f1-0a0284d35381\") " pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:45 crc 
kubenswrapper[4805]: I0216 20:58:45.939781 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-etcd-client\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.939896 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/54234ec6-6906-4706-8baf-839fc054773b-certs\") pod \"machine-config-server-kzwl5\" (UID: \"54234ec6-6906-4706-8baf-839fc054773b\") " pod="openshift-machine-config-operator/machine-config-server-kzwl5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.940337 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61c85f40-93cf-46d4-8a43-751ed991de0c-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.940652 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b7jmg\" (UID: \"83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.941324 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/54234ec6-6906-4706-8baf-839fc054773b-node-bootstrap-token\") pod \"machine-config-server-kzwl5\" (UID: \"54234ec6-6906-4706-8baf-839fc054773b\") " 
pod="openshift-machine-config-operator/machine-config-server-kzwl5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.941457 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbf84de5-d8b7-4f52-98ec-76d973dc290c-proxy-tls\") pod \"machine-config-controller-84d6567774-btkhr\" (UID: \"bbf84de5-d8b7-4f52-98ec-76d973dc290c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.941513 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ebb1f28c-06bf-4127-beab-4339bcc3c559-signing-key\") pod \"service-ca-9c57cc56f-pp7j5\" (UID: \"ebb1f28c-06bf-4127-beab-4339bcc3c559\") " pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.941859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf703159-340e-4a50-a7fa-4eb5402fabbf-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l4tc4\" (UID: \"cf703159-340e-4a50-a7fa-4eb5402fabbf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.942527 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a451d6a2-4e84-4838-89be-08a88869a68e-secret-volume\") pod \"collect-profiles-29521245-h42wk\" (UID: \"a451d6a2-4e84-4838-89be-08a88869a68e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.946558 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/03699c15-f204-4a9b-bf39-359c8734495d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p2r8s\" (UID: 
\"03699c15-f204-4a9b-bf39-359c8734495d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.952184 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66159345-0259-49a9-a234-ce7520f5b6c6-serving-cert\") pod \"openshift-config-operator-7777fb866f-t4bkt\" (UID: \"66159345-0259-49a9-a234-ce7520f5b6c6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.954363 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-bound-sa-token\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.959386 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z4ff\" (UniqueName: \"kubernetes.io/projected/e81b583e-8f61-44e8-b206-2e7b31ca3626-kube-api-access-4z4ff\") pod \"downloads-7954f5f757-2t9r2\" (UID: \"e81b583e-8f61-44e8-b206-2e7b31ca3626\") " pod="openshift-console/downloads-7954f5f757-2t9r2" Feb 16 20:58:45 crc kubenswrapper[4805]: I0216 20:58:45.980195 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dvhz\" (UniqueName: \"kubernetes.io/projected/2530eb64-2099-45e0-9727-ea9987f22ed5-kube-api-access-7dvhz\") pod \"console-f9d7485db-h2zb9\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.011931 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9fzl\" (UniqueName: \"kubernetes.io/projected/5f9120dc-89fa-43b6-b757-925e25598369-kube-api-access-c9fzl\") pod 
\"machine-api-operator-5694c8668f-r4r7b\" (UID: \"5f9120dc-89fa-43b6-b757-925e25598369\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.013958 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:46 crc kubenswrapper[4805]: E0216 20:58:46.014315 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.514302231 +0000 UTC m=+144.332985526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.039779 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8bzs\" (UniqueName: \"kubernetes.io/projected/fd240135-deca-4bb9-907c-0fb3995a76a5-kube-api-access-f8bzs\") pod \"console-operator-58897d9998-psbvs\" (UID: \"fd240135-deca-4bb9-907c-0fb3995a76a5\") " pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.050579 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.060566 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4bb5\" (UniqueName: \"kubernetes.io/projected/298ba8df-baa6-4b79-b9dd-078f03b74975-kube-api-access-k4bb5\") pod \"ingress-canary-jvx2g\" (UID: \"298ba8df-baa6-4b79-b9dd-078f03b74975\") " pod="openshift-ingress-canary/ingress-canary-jvx2g" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.074343 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.081048 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxg5v\" (UniqueName: \"kubernetes.io/projected/e72fcc0c-766d-4396-a6a5-d69926db1197-kube-api-access-vxg5v\") pod \"migrator-59844c95c7-22q5q\" (UID: \"e72fcc0c-766d-4396-a6a5-d69926db1197\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.104390 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.105141 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knmpl\" (UniqueName: \"kubernetes.io/projected/03699c15-f204-4a9b-bf39-359c8734495d-kube-api-access-knmpl\") pod \"olm-operator-6b444d44fb-p2r8s\" (UID: \"03699c15-f204-4a9b-bf39-359c8734495d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.114932 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.116711 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr9gl\" (UniqueName: \"kubernetes.io/projected/0f43042e-2586-44f6-93f1-0a0284d35381-kube-api-access-wr9gl\") pod \"dns-default-fmmsk\" (UID: \"0f43042e-2586-44f6-93f1-0a0284d35381\") " pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:46 crc kubenswrapper[4805]: E0216 20:58:46.116967 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.616951801 +0000 UTC m=+144.435635096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.147202 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.148402 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n56s\" (UniqueName: \"kubernetes.io/projected/83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3-kube-api-access-7n56s\") pod \"package-server-manager-789f6589d5-b7jmg\" (UID: \"83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.148873 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.159126 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.161811 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r48j\" (UniqueName: \"kubernetes.io/projected/cf703159-340e-4a50-a7fa-4eb5402fabbf-kube-api-access-8r48j\") pod \"multus-admission-controller-857f4d67dd-l4tc4\" (UID: \"cf703159-340e-4a50-a7fa-4eb5402fabbf\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.166125 4805 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jvx2g" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.180154 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zhvq\" (UniqueName: \"kubernetes.io/projected/a7ebd63b-db13-4e5d-bc30-aaf3469daab4-kube-api-access-9zhvq\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvfgn\" (UID: \"a7ebd63b-db13-4e5d-bc30-aaf3469daab4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.200763 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2t9r2" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.209865 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc37b9d5-4b67-4acc-bd4e-2f47257edff7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hzk8v\" (UID: \"fc37b9d5-4b67-4acc-bd4e-2f47257edff7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.216364 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:46 crc kubenswrapper[4805]: E0216 20:58:46.217090 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:46.717069711 +0000 UTC m=+144.535753006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.224591 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hb5d\" (UniqueName: \"kubernetes.io/projected/5075d111-78c5-40b6-8b8e-1e5ce57d943b-kube-api-access-7hb5d\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zzrz\" (UID: \"5075d111-78c5-40b6-8b8e-1e5ce57d943b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.242266 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.247935 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26478\" (UniqueName: \"kubernetes.io/projected/721818d2-9ed6-4791-8fd7-8e01a1bbbe10-kube-api-access-26478\") pod \"machine-config-operator-74547568cd-9jxpt\" (UID: \"721818d2-9ed6-4791-8fd7-8e01a1bbbe10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.257990 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.272848 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7jqh\" (UniqueName: \"kubernetes.io/projected/a9ac0f09-69ad-444c-b827-cbb26c8623fb-kube-api-access-m7jqh\") pod \"marketplace-operator-79b997595-hkjs5\" (UID: \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:46 crc kubenswrapper[4805]: W0216 20:58:46.275938 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd04bc3d4_3e2d_489d_83ad_77893578d020.slice/crio-568b0263ad2cef6f9933fbd4d93be70c9f41e138f9671a6bdc2098f1d0378444 WatchSource:0}: Error finding container 568b0263ad2cef6f9933fbd4d93be70c9f41e138f9671a6bdc2098f1d0378444: Status 404 returned error can't find the container with id 568b0263ad2cef6f9933fbd4d93be70c9f41e138f9671a6bdc2098f1d0378444 Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.286032 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.290099 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrgpd\" (UniqueName: \"kubernetes.io/projected/91dbd059-66ac-4a69-b40d-c444e771f9b1-kube-api-access-vrgpd\") pod \"service-ca-operator-777779d784-wf9n2\" (UID: \"91dbd059-66ac-4a69-b40d-c444e771f9b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.300807 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.313941 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.320950 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:46 crc kubenswrapper[4805]: E0216 20:58:46.321287 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.821274281 +0000 UTC m=+144.639957576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.322392 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.326958 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.327597 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l65v9\" (UniqueName: \"kubernetes.io/projected/18bc5c62-7469-4926-a3e2-fe9eb48844c8-kube-api-access-l65v9\") pod \"csi-hostpathplugin-r5h2d\" (UID: \"18bc5c62-7469-4926-a3e2-fe9eb48844c8\") " pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.329929 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.338746 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvhg\" (UniqueName: \"kubernetes.io/projected/7fe20d46-1095-4f1e-b29b-bdce644a87b5-kube-api-access-9nvhg\") pod \"catalog-operator-68c6474976-r4brg\" (UID: \"7fe20d46-1095-4f1e-b29b-bdce644a87b5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.345504 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6c9v\" (UniqueName: \"kubernetes.io/projected/54234ec6-6906-4706-8baf-839fc054773b-kube-api-access-x6c9v\") pod \"machine-config-server-kzwl5\" (UID: \"54234ec6-6906-4706-8baf-839fc054773b\") " pod="openshift-machine-config-operator/machine-config-server-kzwl5" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.345750 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.350229 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.355663 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.363599 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.365931 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tnjs\" (UniqueName: \"kubernetes.io/projected/bbf84de5-d8b7-4f52-98ec-76d973dc290c-kube-api-access-7tnjs\") pod \"machine-config-controller-84d6567774-btkhr\" (UID: \"bbf84de5-d8b7-4f52-98ec-76d973dc290c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.369834 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.373627 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.383132 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.386350 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr8bj\" (UniqueName: \"kubernetes.io/projected/61c85f40-93cf-46d4-8a43-751ed991de0c-kube-api-access-jr8bj\") pod \"packageserver-d55dfcdfc-k8v8x\" (UID: \"61c85f40-93cf-46d4-8a43-751ed991de0c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.394289 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.398227 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.401659 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j7rx\" (UniqueName: \"kubernetes.io/projected/ebb1f28c-06bf-4127-beab-4339bcc3c559-kube-api-access-2j7rx\") pod \"service-ca-9c57cc56f-pp7j5\" (UID: \"ebb1f28c-06bf-4127-beab-4339bcc3c559\") " pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.405193 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.416431 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.426031 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.430233 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kzwl5" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.431210 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:46 crc kubenswrapper[4805]: E0216 20:58:46.432586 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:46.932563446 +0000 UTC m=+144.751246741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.437145 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7rsf\" (UniqueName: \"kubernetes.io/projected/a451d6a2-4e84-4838-89be-08a88869a68e-kube-api-access-m7rsf\") pod \"collect-profiles-29521245-h42wk\" (UID: \"a451d6a2-4e84-4838-89be-08a88869a68e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.445290 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28hdv\" (UniqueName: 
\"kubernetes.io/projected/66159345-0259-49a9-a234-ce7520f5b6c6-kube-api-access-28hdv\") pod \"openshift-config-operator-7777fb866f-t4bkt\" (UID: \"66159345-0259-49a9-a234-ce7520f5b6c6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" Feb 16 20:58:46 crc kubenswrapper[4805]: W0216 20:58:46.446943 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fea5ff9_3c84_499c_aca5_f7af4320a677.slice/crio-6cc6d56beb70d66f7d82e09b98c83efbf4e725d69ad0a76461aa6239df53d80f WatchSource:0}: Error finding container 6cc6d56beb70d66f7d82e09b98c83efbf4e725d69ad0a76461aa6239df53d80f: Status 404 returned error can't find the container with id 6cc6d56beb70d66f7d82e09b98c83efbf4e725d69ad0a76461aa6239df53d80f Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.452247 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.459617 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.470226 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f7xj\" (UniqueName: \"kubernetes.io/projected/58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40-kube-api-access-7f7xj\") pod \"etcd-operator-b45778765-hkpgz\" (UID: \"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.496635 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wwm8v"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.536338 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:46 crc kubenswrapper[4805]: E0216 20:58:46.536760 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.036743306 +0000 UTC m=+144.855426601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.564160 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs"] Feb 16 20:58:46 crc kubenswrapper[4805]: W0216 20:58:46.574636 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97145a00_5917_496b_8eaa_48da22c29d3d.slice/crio-31e5d2fbbb551aa218d6d4e226fe26fc9ddde4586084b9b6b7c3e8c1a7199fab WatchSource:0}: Error finding container 31e5d2fbbb551aa218d6d4e226fe26fc9ddde4586084b9b6b7c3e8c1a7199fab: Status 404 returned error can't find the container with id 31e5d2fbbb551aa218d6d4e226fe26fc9ddde4586084b9b6b7c3e8c1a7199fab Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.575439 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87pxp"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.577569 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt"] Feb 16 20:58:46 crc kubenswrapper[4805]: W0216 20:58:46.592071 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bc5d8e8_00a8_4927_9bcc_2b7d3083a93d.slice/crio-250cde3f731cfaa38a58cf0114fa318f7a129dbcc9374bc9f64116a9e99495a7 WatchSource:0}: Error finding container 250cde3f731cfaa38a58cf0114fa318f7a129dbcc9374bc9f64116a9e99495a7: Status 404 returned error can't 
find the container with id 250cde3f731cfaa38a58cf0114fa318f7a129dbcc9374bc9f64116a9e99495a7 Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.592303 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.611285 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.623627 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h2zb9"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.623860 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-45jmj"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.645097 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:46 crc kubenswrapper[4805]: E0216 20:58:46.645389 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.145354607 +0000 UTC m=+144.964037902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.645935 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:46 crc kubenswrapper[4805]: E0216 20:58:46.646414 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.146398396 +0000 UTC m=+144.965081691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.677456 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" Feb 16 20:58:46 crc kubenswrapper[4805]: W0216 20:58:46.678403 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54234ec6_6906_4706_8baf_839fc054773b.slice/crio-c41a3710cf3fbb3c0d5d54752030fe335f28734119547098638f2be6c1d07e1a WatchSource:0}: Error finding container c41a3710cf3fbb3c0d5d54752030fe335f28734119547098638f2be6c1d07e1a: Status 404 returned error can't find the container with id c41a3710cf3fbb3c0d5d54752030fe335f28734119547098638f2be6c1d07e1a Feb 16 20:58:46 crc kubenswrapper[4805]: W0216 20:58:46.680795 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2530eb64_2099_45e0_9727_ea9987f22ed5.slice/crio-8b0e680a639c58bcd5a1da563e304dc420950f6a802277531e8aad68e3eaa86a WatchSource:0}: Error finding container 8b0e680a639c58bcd5a1da563e304dc420950f6a802277531e8aad68e3eaa86a: Status 404 returned error can't find the container with id 8b0e680a639c58bcd5a1da563e304dc420950f6a802277531e8aad68e3eaa86a Feb 16 20:58:46 crc kubenswrapper[4805]: W0216 20:58:46.722371 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod239ca8a8_a575_4ac6_a6c9_df5f9aed2db6.slice/crio-bdc02e1a22cd8d97782ac9d21cd056b67f4048c4ec8b2b0dc421c1a91e30f4cf WatchSource:0}: Error finding container bdc02e1a22cd8d97782ac9d21cd056b67f4048c4ec8b2b0dc421c1a91e30f4cf: Status 404 returned error can't find the container with id bdc02e1a22cd8d97782ac9d21cd056b67f4048c4ec8b2b0dc421c1a91e30f4cf Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.746446 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:46 crc kubenswrapper[4805]: E0216 20:58:46.747332 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.247316228 +0000 UTC m=+145.065999523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.759842 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fmmsk"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.773521 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jvx2g"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.849266 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:46 crc kubenswrapper[4805]: E0216 20:58:46.849602 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.349590466 +0000 UTC m=+145.168273761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.933900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" event={"ID":"cc45d729-38e4-4964-b5b6-de896f734fe8","Type":"ContainerStarted","Data":"72e2493fbf1e2b6a9f4b3e6c404cfdfdcda772800466c2d7c77c9c933e2e4792"} Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.933959 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" event={"ID":"cc45d729-38e4-4964-b5b6-de896f734fe8","Type":"ContainerStarted","Data":"75842bf86b823e8fb6596afb8be5a81f0a1f69ee6533670c082d795318525d59"} Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.935018 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.939224 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" event={"ID":"9f398957-312d-4861-8620-ea7ca65d9bc7","Type":"ContainerStarted","Data":"480c400748fdd222e211e7f1f8855fe223525a071bc8762243ca7cf13e1228ec"} Feb 16 
20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.940009 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" event={"ID":"2fea5ff9-3c84-499c-aca5-f7af4320a677","Type":"ContainerStarted","Data":"6cc6d56beb70d66f7d82e09b98c83efbf4e725d69ad0a76461aa6239df53d80f"} Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.942178 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-prjll" event={"ID":"fc38c573-234c-4867-b170-adabd2bee815","Type":"ContainerStarted","Data":"42b44b7d94b79a73f21cf5024ac3ca4baa45e7b52a63c67aa10b4cf9c6cf80be"} Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.942984 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kzwl5" event={"ID":"54234ec6-6906-4706-8baf-839fc054773b","Type":"ContainerStarted","Data":"c41a3710cf3fbb3c0d5d54752030fe335f28734119547098638f2be6c1d07e1a"} Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.945813 4805 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-gsn4v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.945857 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" podUID="cc45d729-38e4-4964-b5b6-de896f734fe8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.950336 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:46 crc kubenswrapper[4805]: E0216 20:58:46.950680 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.450666432 +0000 UTC m=+145.269349727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.952326 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r4r7b"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.976584 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" event={"ID":"97145a00-5917-496b-8eaa-48da22c29d3d","Type":"ContainerStarted","Data":"31e5d2fbbb551aa218d6d4e226fe26fc9ddde4586084b9b6b7c3e8c1a7199fab"} Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.979361 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" event={"ID":"ed3fdbaa-9dfc-4f4f-991e-1710c03738f4","Type":"ContainerStarted","Data":"a29110e129ab3bab2a6945c919845fc8331d0ee8c7977fa53c824008912e767e"} Feb 16 20:58:46 crc 
kubenswrapper[4805]: I0216 20:58:46.979407 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" event={"ID":"ed3fdbaa-9dfc-4f4f-991e-1710c03738f4","Type":"ContainerStarted","Data":"2ef32872d77643a5ae1724f1b2c1c0a3d60bc6e22eb3245f1f92a0435a82c57f"} Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.980349 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" event={"ID":"a17754e1-d84d-4025-9398-2ad41d0f8da6","Type":"ContainerStarted","Data":"8b5bc82dda371c87b9dce4cf5d3704904962dac0865ee5e83a4e8e4201134640"} Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.985862 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2t9r2"] Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.988431 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" event={"ID":"d04bc3d4-3e2d-489d-83ad-77893578d020","Type":"ContainerStarted","Data":"279d59ec78bfa7228d6115f6ceb4affd56cf2eaf9eae17ddff9c38f8537802b2"} Feb 16 20:58:46 crc kubenswrapper[4805]: I0216 20:58:46.988472 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" event={"ID":"d04bc3d4-3e2d-489d-83ad-77893578d020","Type":"ContainerStarted","Data":"568b0263ad2cef6f9933fbd4d93be70c9f41e138f9671a6bdc2098f1d0378444"} Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.010513 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" event={"ID":"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d","Type":"ContainerStarted","Data":"250cde3f731cfaa38a58cf0114fa318f7a129dbcc9374bc9f64116a9e99495a7"} Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.012796 4805 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.024690 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" event={"ID":"7f5646d3-161a-4168-8793-6b7372b1fc9b","Type":"ContainerStarted","Data":"d1f5788393a4a831b451e62a0dd6b1ec257f4073509d73aeace6002106f8d9ca"} Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.026689 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" event={"ID":"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de","Type":"ContainerStarted","Data":"4a54ee263569c625c9bee9b3f127b3fa8febe165d6f18a828ba66251de96e787"} Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.044062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fzjtf" event={"ID":"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a","Type":"ContainerStarted","Data":"2af28ce792410eda0a3be7526d23a03ae8e29b6367739ab5e16f05b9a8aa7104"} Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.044105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fzjtf" event={"ID":"9ebe9ce6-6b40-435b-a14f-85a80c4ce52a","Type":"ContainerStarted","Data":"ae0b7f2dc7e8d1da1aa47fec0a091f9d28e69e5964a307baca41d12f896dc525"} Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.048175 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" event={"ID":"ba323e22-bfbd-4b64-99ab-4695831c69a7","Type":"ContainerStarted","Data":"2d9435d295f1244952ffd3fd9d2d02105af5cc075f96bb7bc59a39e6a8dfd446"} Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.048575 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg"] Feb 16 20:58:47 crc 
kubenswrapper[4805]: I0216 20:58:47.051802 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:47 crc kubenswrapper[4805]: E0216 20:58:47.053258 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.553246179 +0000 UTC m=+145.371929474 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.068103 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-psbvs"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.072868 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" event={"ID":"239ca8a8-a575-4ac6-a6c9-df5f9aed2db6","Type":"ContainerStarted","Data":"bdc02e1a22cd8d97782ac9d21cd056b67f4048c4ec8b2b0dc421c1a91e30f4cf"} Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.074679 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" 
event={"ID":"37168f5d-63ef-497d-bbbf-06b677a39490","Type":"ContainerStarted","Data":"c551bc7f7404fbcfdec1dd5db3a0865c8fbdb78867542aa31afb41ae9bf8f2ce"} Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.076699 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h2zb9" event={"ID":"2530eb64-2099-45e0-9727-ea9987f22ed5","Type":"ContainerStarted","Data":"8b0e680a639c58bcd5a1da563e304dc420950f6a802277531e8aad68e3eaa86a"} Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.106611 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-fzjtf" Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.114963 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz"] Feb 16 20:58:47 crc kubenswrapper[4805]: W0216 20:58:47.139199 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f43042e_2586_44f6_93f1_0a0284d35381.slice/crio-624777548d5567750959824547cc42317fee2513743c04601fff3c3e604e56a9 WatchSource:0}: Error finding container 624777548d5567750959824547cc42317fee2513743c04601fff3c3e604e56a9: Status 404 returned error can't find the container with id 624777548d5567750959824547cc42317fee2513743c04601fff3c3e604e56a9 Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.144489 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.152497 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Feb 16 20:58:47 crc kubenswrapper[4805]: E0216 20:58:47.154132 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.654107369 +0000 UTC m=+145.472790664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.158207 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:47 crc kubenswrapper[4805]: E0216 20:58:47.160630 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.660616837 +0000 UTC m=+145.479300132 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4805]: W0216 20:58:47.175627 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod298ba8df_baa6_4b79_b9dd_078f03b74975.slice/crio-30ef696abc4582665e164f8c789337659ec8df36e9dfd0c8f493c74a619ad088 WatchSource:0}: Error finding container 30ef696abc4582665e164f8c789337659ec8df36e9dfd0c8f493c74a619ad088: Status 404 returned error can't find the container with id 30ef696abc4582665e164f8c789337659ec8df36e9dfd0c8f493c74a619ad088 Feb 16 20:58:47 crc kubenswrapper[4805]: W0216 20:58:47.177107 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f9120dc_89fa_43b6_b757_925e25598369.slice/crio-2621262c0ca3d680215063125258749add63c09cccda75d3da8fdaec8a7009fb WatchSource:0}: Error finding container 2621262c0ca3d680215063125258749add63c09cccda75d3da8fdaec8a7009fb: Status 404 returned error can't find the container with id 2621262c0ca3d680215063125258749add63c09cccda75d3da8fdaec8a7009fb Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.194680 4805 patch_prober.go:28] interesting pod/router-default-5444994796-fzjtf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:47 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 16 20:58:47 crc kubenswrapper[4805]: [+]process-running ok Feb 16 
20:58:47 crc kubenswrapper[4805]: healthz check failed Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.194739 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fzjtf" podUID="9ebe9ce6-6b40-435b-a14f-85a80c4ce52a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.258850 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:47 crc kubenswrapper[4805]: E0216 20:58:47.259377 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.759357609 +0000 UTC m=+145.578040904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.261547 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-76qvc" podStartSLOduration=123.261531178 podStartE2EDuration="2m3.261531178s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:47.260271124 +0000 UTC m=+145.078954419" watchObservedRunningTime="2026-02-16 20:58:47.261531178 +0000 UTC m=+145.080214473" Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.288189 4805 csr.go:261] certificate signing request csr-ktbsm is approved, waiting to be issued Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.298415 4805 csr.go:257] certificate signing request csr-ktbsm is issued Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.364982 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:47 crc kubenswrapper[4805]: E0216 20:58:47.365300 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.865287387 +0000 UTC m=+145.683970682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.465932 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:47 crc kubenswrapper[4805]: E0216 20:58:47.466392 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:47.966372214 +0000 UTC m=+145.785055509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.550877 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l4tc4"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.561797 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.568049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:47 crc kubenswrapper[4805]: E0216 20:58:47.568349 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.068338044 +0000 UTC m=+145.887021339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.670935 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:47 crc kubenswrapper[4805]: E0216 20:58:47.671671 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.171656281 +0000 UTC m=+145.990339576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.709541 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.755201 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.773469 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:47 crc kubenswrapper[4805]: E0216 20:58:47.773815 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.273799277 +0000 UTC m=+146.092482572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.782236 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.814364 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hkjs5"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.833100 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pp7j5"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.873825 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r5h2d"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.873953 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.893824 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.919833 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn"] Feb 16 20:58:47 crc kubenswrapper[4805]: E0216 20:58:47.920711 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.403069178 +0000 UTC m=+146.221752473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.926687 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.935192 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x"] Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.937389 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk"] Feb 16 20:58:47 crc kubenswrapper[4805]: W0216 20:58:47.973856 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7ebd63b_db13_4e5d_bc30_aaf3469daab4.slice/crio-d91f6a661cd5ceb4a4a85886db878c263be06dbae0233ec9aa55163d4ee385e8 WatchSource:0}: Error finding container d91f6a661cd5ceb4a4a85886db878c263be06dbae0233ec9aa55163d4ee385e8: Status 404 returned error can't find the 
container with id d91f6a661cd5ceb4a4a85886db878c263be06dbae0233ec9aa55163d4ee385e8 Feb 16 20:58:47 crc kubenswrapper[4805]: I0216 20:58:47.998043 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:47 crc kubenswrapper[4805]: E0216 20:58:47.998469 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.498433929 +0000 UTC m=+146.317117234 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4805]: W0216 20:58:48.010091 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebb1f28c_06bf_4127_beab_4339bcc3c559.slice/crio-e1c7c5b261f415265d9d7ccea15b5385c31b72935da2379067bdcdc1e7b9655d WatchSource:0}: Error finding container e1c7c5b261f415265d9d7ccea15b5385c31b72935da2379067bdcdc1e7b9655d: Status 404 returned error can't find the container with id e1c7c5b261f415265d9d7ccea15b5385c31b72935da2379067bdcdc1e7b9655d Feb 16 20:58:48 crc kubenswrapper[4805]: W0216 20:58:48.027608 4805 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18bc5c62_7469_4926_a3e2_fe9eb48844c8.slice/crio-218ff1b49e80473a1ee07ce9c03bcca52fd3a2f98e67729bed357dcdd1a7ec28 WatchSource:0}: Error finding container 218ff1b49e80473a1ee07ce9c03bcca52fd3a2f98e67729bed357dcdd1a7ec28: Status 404 returned error can't find the container with id 218ff1b49e80473a1ee07ce9c03bcca52fd3a2f98e67729bed357dcdd1a7ec28 Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.079271 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nh6q4" podStartSLOduration=124.079250915 podStartE2EDuration="2m4.079250915s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.039291429 +0000 UTC m=+145.857974724" watchObservedRunningTime="2026-02-16 20:58:48.079250915 +0000 UTC m=+145.897934210" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.080872 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hkpgz"] Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.085737 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cd6vq" podStartSLOduration=125.085703021 podStartE2EDuration="2m5.085703021s" podCreationTimestamp="2026-02-16 20:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.083614484 +0000 UTC m=+145.902297779" watchObservedRunningTime="2026-02-16 20:58:48.085703021 +0000 UTC m=+145.904386316" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.089607 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-psbvs" event={"ID":"fd240135-deca-4bb9-907c-0fb3995a76a5","Type":"ContainerStarted","Data":"869c9dcb890e7ae17f0fc698450599e4266cc95711030ab9a57588c6351529f1"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.098458 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q" event={"ID":"e72fcc0c-766d-4396-a6a5-d69926db1197","Type":"ContainerStarted","Data":"eed641430c2c5b07a6600a3745524795169e87ef7dc62a256dc49abc1bd4b8f5"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.098752 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:48 crc kubenswrapper[4805]: E0216 20:58:48.098891 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.598868748 +0000 UTC m=+146.417552053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.099030 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:48 crc kubenswrapper[4805]: E0216 20:58:48.099365 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.599350321 +0000 UTC m=+146.418033686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.111944 4805 patch_prober.go:28] interesting pod/router-default-5444994796-fzjtf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:48 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 16 20:58:48 crc kubenswrapper[4805]: [+]process-running ok Feb 16 20:58:48 crc kubenswrapper[4805]: healthz check failed Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.111999 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fzjtf" podUID="9ebe9ce6-6b40-435b-a14f-85a80c4ce52a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.113998 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" event={"ID":"18bc5c62-7469-4926-a3e2-fe9eb48844c8","Type":"ContainerStarted","Data":"218ff1b49e80473a1ee07ce9c03bcca52fd3a2f98e67729bed357dcdd1a7ec28"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.120090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fmmsk" event={"ID":"0f43042e-2586-44f6-93f1-0a0284d35381","Type":"ContainerStarted","Data":"624777548d5567750959824547cc42317fee2513743c04601fff3c3e604e56a9"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.120484 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" podStartSLOduration=123.120470945 podStartE2EDuration="2m3.120470945s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.119250683 +0000 UTC m=+145.937933978" watchObservedRunningTime="2026-02-16 20:58:48.120470945 +0000 UTC m=+145.939154240" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.123368 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" event={"ID":"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de","Type":"ContainerStarted","Data":"858970c68d4a78819972ca85cd81e6f15f8862826331839ed56552b0885c5225"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.123619 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.134749 4805 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-45jmj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.135003 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" podUID="e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.135999 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" event={"ID":"83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3","Type":"ContainerStarted","Data":"8f7435efc1739ff709047169c524916d83dbfac0e4dfc6efa1ea2d54fec5c9d0"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.138486 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" event={"ID":"239ca8a8-a575-4ac6-a6c9-df5f9aed2db6","Type":"ContainerStarted","Data":"b79253cef06e58424021c167d80015c4eb5d33f04e5db805dc181f48ee75fa94"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.140138 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" event={"ID":"91dbd059-66ac-4a69-b40d-c444e771f9b1","Type":"ContainerStarted","Data":"d20783be2eeb4204de4b29240068acf3a1a1ca4637b6af3dfb935f3b5cac7b58"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.144983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" event={"ID":"61c85f40-93cf-46d4-8a43-751ed991de0c","Type":"ContainerStarted","Data":"f9f01005d12b3a99d589108f7f68e39a872cef7835bac7cc8fba55d235fbeb25"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.145963 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2t9r2" event={"ID":"e81b583e-8f61-44e8-b206-2e7b31ca3626","Type":"ContainerStarted","Data":"3ae5a0efa74cc34d9d2155a9f22d64ab5989fb70a45a14c8b4c725a397c63678"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.162694 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-fzjtf" podStartSLOduration=123.162676993 podStartE2EDuration="2m3.162676993s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.160801441 +0000 UTC m=+145.979484736" watchObservedRunningTime="2026-02-16 20:58:48.162676993 +0000 UTC m=+145.981360288" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.164523 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" event={"ID":"66159345-0259-49a9-a234-ce7520f5b6c6","Type":"ContainerStarted","Data":"0e6ba6ee8635824a5a244fba5a1a33aa44b45165388c9d9264a93ae256130e2e"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.166762 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" event={"ID":"bbf84de5-d8b7-4f52-98ec-76d973dc290c","Type":"ContainerStarted","Data":"34558ad292b3cacb5195f68d2cd786b2632b120817b9d3dc828eba222784f63b"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.169582 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h2zb9" event={"ID":"2530eb64-2099-45e0-9727-ea9987f22ed5","Type":"ContainerStarted","Data":"3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.175760 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" event={"ID":"03699c15-f204-4a9b-bf39-359c8734495d","Type":"ContainerStarted","Data":"fa9351e9df978d3a5e6b517a6e77907d3e8191885f2324cfee498d7591b857fd"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.182698 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" event={"ID":"721818d2-9ed6-4791-8fd7-8e01a1bbbe10","Type":"ContainerStarted","Data":"0a7003d9c2585977bdbdcb4ff7a4145423a2e488a5c8839f4a8d2356324f8f16"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.194481 4805 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xtnt" podStartSLOduration=124.194464055 podStartE2EDuration="2m4.194464055s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.194298461 +0000 UTC m=+146.012981756" watchObservedRunningTime="2026-02-16 20:58:48.194464055 +0000 UTC m=+146.013147350" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.200745 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:48 crc kubenswrapper[4805]: E0216 20:58:48.201427 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.701410004 +0000 UTC m=+146.520093299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.204293 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jvx2g" event={"ID":"298ba8df-baa6-4b79-b9dd-078f03b74975","Type":"ContainerStarted","Data":"45ab5111df9f79ffab082959e20801e6f42cc1ffb54a2585276efb80a8d008f5"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.204329 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jvx2g" event={"ID":"298ba8df-baa6-4b79-b9dd-078f03b74975","Type":"ContainerStarted","Data":"30ef696abc4582665e164f8c789337659ec8df36e9dfd0c8f493c74a619ad088"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.208500 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" event={"ID":"cf703159-340e-4a50-a7fa-4eb5402fabbf","Type":"ContainerStarted","Data":"7150830c6110c3447209ff32f4c5dd8acbb9fa9a3fe4d9a164ca600ce56a7e08"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.222076 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" event={"ID":"5f9120dc-89fa-43b6-b757-925e25598369","Type":"ContainerStarted","Data":"2621262c0ca3d680215063125258749add63c09cccda75d3da8fdaec8a7009fb"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.229426 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" 
event={"ID":"97145a00-5917-496b-8eaa-48da22c29d3d","Type":"ContainerStarted","Data":"e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.230334 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.240227 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" event={"ID":"ed3fdbaa-9dfc-4f4f-991e-1710c03738f4","Type":"ContainerStarted","Data":"9347ff41c2902b3f567b7ab73d2ae97b57668a3942bcd53d1b70293595f6aea9"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.245178 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" podStartSLOduration=124.245159914 podStartE2EDuration="2m4.245159914s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.243607221 +0000 UTC m=+146.062290526" watchObservedRunningTime="2026-02-16 20:58:48.245159914 +0000 UTC m=+146.063843219" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.246691 4805 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wwm8v container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.246816 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" podUID="97145a00-5917-496b-8eaa-48da22c29d3d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 
10.217.0.8:6443: connect: connection refused" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.249936 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kzwl5" event={"ID":"54234ec6-6906-4706-8baf-839fc054773b","Type":"ContainerStarted","Data":"d5374c7d484b7044a376eda6a8dd2526b7b95ac508b754af960953be0d6234f9"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.250866 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" event={"ID":"a7ebd63b-db13-4e5d-bc30-aaf3469daab4","Type":"ContainerStarted","Data":"d91f6a661cd5ceb4a4a85886db878c263be06dbae0233ec9aa55163d4ee385e8"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.251667 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz" event={"ID":"5075d111-78c5-40b6-8b8e-1e5ce57d943b","Type":"ContainerStarted","Data":"5aa6120b0c3ea674e19c243ab67a40fec02ac99888804d6d2f1c829ba5286e5b"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.252452 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" event={"ID":"a9ac0f09-69ad-444c-b827-cbb26c8623fb","Type":"ContainerStarted","Data":"931a4c5a61e00b78b44739d950781052ae7b2b3fcceb99fc462fc75dde66f984"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.265521 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" event={"ID":"2fea5ff9-3c84-499c-aca5-f7af4320a677","Type":"ContainerStarted","Data":"e8ec15563d7d7f6da4546187a7964d3cfb6788f20532f76c5122a06638364d6a"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.268971 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" 
event={"ID":"ba323e22-bfbd-4b64-99ab-4695831c69a7","Type":"ContainerStarted","Data":"f87f637cb6295920cdb9a27304a3e8e6bf4dcad4b24cce8cbdc449bb8fa452a7"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.271489 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" event={"ID":"fc37b9d5-4b67-4acc-bd4e-2f47257edff7","Type":"ContainerStarted","Data":"1f4743708a62ff028ae37e99fbdbe271cb67251faf65877bec6bfda3b3567c8d"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.274832 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jvx2g" podStartSLOduration=5.274783498 podStartE2EDuration="5.274783498s" podCreationTimestamp="2026-02-16 20:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.271773257 +0000 UTC m=+146.090456552" watchObservedRunningTime="2026-02-16 20:58:48.274783498 +0000 UTC m=+146.093466793" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.278695 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" event={"ID":"7fe20d46-1095-4f1e-b29b-bdce644a87b5","Type":"ContainerStarted","Data":"ea9f9508e810ff88fde4f242b7b71552bb49135d9e5e9b707b7df110899e89a3"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.282604 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-prjll" event={"ID":"fc38c573-234c-4867-b170-adabd2bee815","Type":"ContainerStarted","Data":"e59b8047784be8931b5012bdef00d11cd5065621346898d3be2935d010238e19"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.287695 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" 
event={"ID":"a451d6a2-4e84-4838-89be-08a88869a68e","Type":"ContainerStarted","Data":"ec5b82bf0b5957fbc906c78a6d7c579752205f2177531d461654628ad18d7e03"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.292977 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" event={"ID":"ebb1f28c-06bf-4127-beab-4339bcc3c559","Type":"ContainerStarted","Data":"e1c7c5b261f415265d9d7ccea15b5385c31b72935da2379067bdcdc1e7b9655d"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.297838 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" event={"ID":"a17754e1-d84d-4025-9398-2ad41d0f8da6","Type":"ContainerStarted","Data":"6aa408e7bd1d59d6b180046350963e47ee1cfe2857836ad4afc5b3d76012a79a"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.303706 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:48 crc kubenswrapper[4805]: E0216 20:58:48.304400 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.804386052 +0000 UTC m=+146.623069347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.304856 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 20:53:47 +0000 UTC, rotation deadline is 2026-12-26 22:16:21.784611445 +0000 UTC Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.305103 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7513h17m33.479513034s for next certificate rotation Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.325169 4805 generic.go:334] "Generic (PLEG): container finished" podID="2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d" containerID="4012089d0eecdfe7cd2b0e0d0d7c75f80715dd866fbfe5a73b522244ec23ba16" exitCode=0 Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.325332 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" event={"ID":"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d","Type":"ContainerDied","Data":"4012089d0eecdfe7cd2b0e0d0d7c75f80715dd866fbfe5a73b522244ec23ba16"} Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.325576 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-h2zb9" podStartSLOduration=124.325557968 podStartE2EDuration="2m4.325557968s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.325139457 +0000 UTC m=+146.143822752" watchObservedRunningTime="2026-02-16 
20:58:48.325557968 +0000 UTC m=+146.144241263" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.338343 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.364811 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" podStartSLOduration=124.364793684 podStartE2EDuration="2m4.364793684s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.364211988 +0000 UTC m=+146.182895293" watchObservedRunningTime="2026-02-16 20:58:48.364793684 +0000 UTC m=+146.183476979" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.405256 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:48 crc kubenswrapper[4805]: E0216 20:58:48.407597 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:48.907576226 +0000 UTC m=+146.726259521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.457904 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsg2r" podStartSLOduration=124.457886732 podStartE2EDuration="2m4.457886732s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.39408842 +0000 UTC m=+146.212771715" watchObservedRunningTime="2026-02-16 20:58:48.457886732 +0000 UTC m=+146.276570027" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.479438 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jbblk" podStartSLOduration=123.479423238 podStartE2EDuration="2m3.479423238s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.47728473 +0000 UTC m=+146.295968025" watchObservedRunningTime="2026-02-16 20:58:48.479423238 +0000 UTC m=+146.298106523" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.508187 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:48 crc kubenswrapper[4805]: E0216 20:58:48.510511 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.010500322 +0000 UTC m=+146.829183617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.521159 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mvqf6" podStartSLOduration=124.521142942 podStartE2EDuration="2m4.521142942s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.5210664 +0000 UTC m=+146.339749715" watchObservedRunningTime="2026-02-16 20:58:48.521142942 +0000 UTC m=+146.339826237" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.604098 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-prjll" podStartSLOduration=124.604080425 podStartE2EDuration="2m4.604080425s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-16 20:58:48.558683241 +0000 UTC m=+146.377366536" watchObservedRunningTime="2026-02-16 20:58:48.604080425 +0000 UTC m=+146.422763720" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.606141 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" podStartSLOduration=123.60612789 podStartE2EDuration="2m3.60612789s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.602608115 +0000 UTC m=+146.421291420" watchObservedRunningTime="2026-02-16 20:58:48.60612789 +0000 UTC m=+146.424811185" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.613432 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:48 crc kubenswrapper[4805]: E0216 20:58:48.613763 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.113744858 +0000 UTC m=+146.932428153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.632759 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kzwl5" podStartSLOduration=5.632743573 podStartE2EDuration="5.632743573s" podCreationTimestamp="2026-02-16 20:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:48.632056475 +0000 UTC m=+146.450739770" watchObservedRunningTime="2026-02-16 20:58:48.632743573 +0000 UTC m=+146.451426868" Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.714709 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:48 crc kubenswrapper[4805]: E0216 20:58:48.715063 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.21505018 +0000 UTC m=+147.033733475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.816015 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:48 crc kubenswrapper[4805]: E0216 20:58:48.816168 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.316145186 +0000 UTC m=+147.134828481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.816459 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:48 crc kubenswrapper[4805]: E0216 20:58:48.816777 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.316768024 +0000 UTC m=+147.135451319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:48 crc kubenswrapper[4805]: I0216 20:58:48.917207 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:48 crc kubenswrapper[4805]: E0216 20:58:48.917528 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.417513621 +0000 UTC m=+147.236196916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.018687 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:49 crc kubenswrapper[4805]: E0216 20:58:49.019180 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.519164512 +0000 UTC m=+147.337847807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.113981 4805 patch_prober.go:28] interesting pod/router-default-5444994796-fzjtf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:49 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 16 20:58:49 crc kubenswrapper[4805]: [+]process-running ok Feb 16 20:58:49 crc kubenswrapper[4805]: healthz check failed Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.114049 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fzjtf" podUID="9ebe9ce6-6b40-435b-a14f-85a80c4ce52a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.123201 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4805]: E0216 20:58:49.123566 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:49.623547719 +0000 UTC m=+147.442231014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.224491 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:49 crc kubenswrapper[4805]: E0216 20:58:49.224977 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.724961784 +0000 UTC m=+147.543645079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.326199 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4805]: E0216 20:58:49.326414 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.82638854 +0000 UTC m=+147.645071835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.326622 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:49 crc kubenswrapper[4805]: E0216 20:58:49.327031 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.827022537 +0000 UTC m=+147.645705832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.355977 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" event={"ID":"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40","Type":"ContainerStarted","Data":"150b5f82b67766c819f779eabd086430827e69a484cabc170a18f962fb405978"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.362177 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" event={"ID":"721818d2-9ed6-4791-8fd7-8e01a1bbbe10","Type":"ContainerStarted","Data":"6b91b097319e20481f896374d02c775941b0f8230135ec9414fc221e09f8e121"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.364595 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q" event={"ID":"e72fcc0c-766d-4396-a6a5-d69926db1197","Type":"ContainerStarted","Data":"e84e666a6919e215b55116a220a2fb34a4e6173c53df5314bfc43b9ce2b4b188"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.364643 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q" event={"ID":"e72fcc0c-766d-4396-a6a5-d69926db1197","Type":"ContainerStarted","Data":"41d0407afb2203294aad17ed5d46610f69ac0ab9b5640416b7245b424450dbfd"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.372461 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" event={"ID":"ba323e22-bfbd-4b64-99ab-4695831c69a7","Type":"ContainerStarted","Data":"2949f90a2093c6f3bc1749205bfb91b7165d3fe73f09ca901533dd927bb83c9d"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.381014 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" event={"ID":"fc37b9d5-4b67-4acc-bd4e-2f47257edff7","Type":"ContainerStarted","Data":"d5d73934008b2eac5e65b71ed9050f292e9b7bad95f45cfbbaed660d1733f515"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.391133 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvx8x" event={"ID":"9f398957-312d-4861-8620-ea7ca65d9bc7","Type":"ContainerStarted","Data":"4238a37c519cf5d8b27f9243e4dd5ae365dca96145df265b08ecf45ebb40e18d"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.401569 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" event={"ID":"91dbd059-66ac-4a69-b40d-c444e771f9b1","Type":"ContainerStarted","Data":"1d3a7f12efd662fa8d45bfad003c487212a9050bba113074e62ae2d4a30394a3"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.408850 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hzk8v" podStartSLOduration=124.40883281 podStartE2EDuration="2m4.40883281s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.407222675 +0000 UTC m=+147.225905970" watchObservedRunningTime="2026-02-16 20:58:49.40883281 +0000 UTC m=+147.227516105" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.418971 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz" event={"ID":"5075d111-78c5-40b6-8b8e-1e5ce57d943b","Type":"ContainerStarted","Data":"7dc8054a392ec380c392c2eaf699ef9d7453b7a09906389687a47b2bdb6a2da6"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.430143 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4805]: E0216 20:58:49.430508 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:49.930493888 +0000 UTC m=+147.749177183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.437306 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" event={"ID":"a7ebd63b-db13-4e5d-bc30-aaf3469daab4","Type":"ContainerStarted","Data":"43eeac0af4c82f35666f9788ef2ac39c1764c7d5c2dc26265dc3d45d8449b047"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.441106 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wf9n2" podStartSLOduration=124.441083646 podStartE2EDuration="2m4.441083646s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.437068487 +0000 UTC m=+147.255751782" watchObservedRunningTime="2026-02-16 20:58:49.441083646 +0000 UTC m=+147.259766941" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.452818 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" event={"ID":"a9ac0f09-69ad-444c-b827-cbb26c8623fb","Type":"ContainerStarted","Data":"5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.453762 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 20:58:49 crc 
kubenswrapper[4805]: I0216 20:58:49.465059 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" event={"ID":"7fe20d46-1095-4f1e-b29b-bdce644a87b5","Type":"ContainerStarted","Data":"ab344c9983da1a1e14801c525e9b511ad3b32a6510911fc130b51f3816b323e5"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.465912 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.475433 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hkjs5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.475493 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" podUID="a9ac0f09-69ad-444c-b827-cbb26c8623fb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.475589 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.475834 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.481228 4805 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r4brg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: 
connection refused" start-of-body= Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.481541 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" podUID="7fe20d46-1095-4f1e-b29b-bdce644a87b5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.509458 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" event={"ID":"37168f5d-63ef-497d-bbbf-06b677a39490","Type":"ContainerStarted","Data":"410f1d52ff17092c80dcdce4a63ec79c7323cf41b9df81d946f55b8ecfd025f6"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.510129 4805 patch_prober.go:28] interesting pod/apiserver-76f77b778f-prjll container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 20:58:49 crc kubenswrapper[4805]: [+]log ok Feb 16 20:58:49 crc kubenswrapper[4805]: [+]etcd ok Feb 16 20:58:49 crc kubenswrapper[4805]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 20:58:49 crc kubenswrapper[4805]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 20:58:49 crc kubenswrapper[4805]: [+]poststarthook/max-in-flight-filter ok Feb 16 20:58:49 crc kubenswrapper[4805]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 20:58:49 crc kubenswrapper[4805]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 16 20:58:49 crc kubenswrapper[4805]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 16 20:58:49 crc kubenswrapper[4805]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 16 20:58:49 crc kubenswrapper[4805]: [+]poststarthook/project.openshift.io-projectcache ok Feb 16 
20:58:49 crc kubenswrapper[4805]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 16 20:58:49 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-startinformers ok Feb 16 20:58:49 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 16 20:58:49 crc kubenswrapper[4805]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 20:58:49 crc kubenswrapper[4805]: livez check failed Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.510161 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-prjll" podUID="fc38c573-234c-4867-b170-adabd2bee815" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.520816 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvfgn" podStartSLOduration=124.520799582 podStartE2EDuration="2m4.520799582s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.520036781 +0000 UTC m=+147.338720076" watchObservedRunningTime="2026-02-16 20:58:49.520799582 +0000 UTC m=+147.339482867" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.522063 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zzrz" podStartSLOduration=124.522056775 podStartE2EDuration="2m4.522056775s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.477153355 +0000 UTC m=+147.295836650" watchObservedRunningTime="2026-02-16 20:58:49.522056775 +0000 UTC 
m=+147.340740070" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.525572 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" event={"ID":"bbf84de5-d8b7-4f52-98ec-76d973dc290c","Type":"ContainerStarted","Data":"06aed5c1cd4bb5c5531ce5c419497a5f8ba076e150cd8890e925c605fe471d53"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.535648 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:49 crc kubenswrapper[4805]: E0216 20:58:49.540829 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.040814986 +0000 UTC m=+147.859498281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.541242 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2t9r2" event={"ID":"e81b583e-8f61-44e8-b206-2e7b31ca3626","Type":"ContainerStarted","Data":"3859f87e536593ec52274fe25e7505be6de5d90d3339013fec4d6573b1e734b1"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.541603 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2t9r2" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.551087 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" event={"ID":"5f9120dc-89fa-43b6-b757-925e25598369","Type":"ContainerStarted","Data":"2a21a38b92ca5a840e0cbb7fa39dc20f5ebdcf6a06cc5cca7dce55f6dc58fee9"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.553457 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-2t9r2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.553560 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2t9r2" podUID="e81b583e-8f61-44e8-b206-2e7b31ca3626" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 
16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.565392 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" event={"ID":"03699c15-f204-4a9b-bf39-359c8734495d","Type":"ContainerStarted","Data":"023014484a1c5d5bf2aeabd14230bb62fb979cf2f1b5bc3d034e23ff8ea8ca51"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.566138 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.574130 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" podStartSLOduration=124.57411093 podStartE2EDuration="2m4.57411093s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.554553719 +0000 UTC m=+147.373237024" watchObservedRunningTime="2026-02-16 20:58:49.57411093 +0000 UTC m=+147.392794225" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.580942 4805 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-p2r8s container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.580986 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" podUID="03699c15-f204-4a9b-bf39-359c8734495d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.593040 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-dns/dns-default-fmmsk" event={"ID":"0f43042e-2586-44f6-93f1-0a0284d35381","Type":"ContainerStarted","Data":"5a6fb67a469ec1f645af4ae6ea081ea7d0a2736d2d729cff797a71ae645700b0"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.594326 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fmmsk" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.598488 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" podStartSLOduration=124.598472772 podStartE2EDuration="2m4.598472772s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.573159444 +0000 UTC m=+147.391842739" watchObservedRunningTime="2026-02-16 20:58:49.598472772 +0000 UTC m=+147.417156067" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.611533 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-psbvs" event={"ID":"fd240135-deca-4bb9-907c-0fb3995a76a5","Type":"ContainerStarted","Data":"c4ded03b0d922419f34d8c702fdbd32d21c21f9e3a63d8424e07f51ef58df23b"} Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.615575 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.629072 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" podStartSLOduration=124.629052863 podStartE2EDuration="2m4.629052863s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
20:58:49.628662973 +0000 UTC m=+147.447346268" watchObservedRunningTime="2026-02-16 20:58:49.629052863 +0000 UTC m=+147.447736158" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.629591 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2t9r2" podStartSLOduration=125.629585897 podStartE2EDuration="2m5.629585897s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.59950446 +0000 UTC m=+147.418187755" watchObservedRunningTime="2026-02-16 20:58:49.629585897 +0000 UTC m=+147.448269192" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.636559 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4805]: E0216 20:58:49.638806 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.138785367 +0000 UTC m=+147.957468662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.703241 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fmmsk" podStartSLOduration=6.703218048 podStartE2EDuration="6.703218048s" podCreationTimestamp="2026-02-16 20:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.661262578 +0000 UTC m=+147.479945873" watchObservedRunningTime="2026-02-16 20:58:49.703218048 +0000 UTC m=+147.521901343" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.743592 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:49 crc kubenswrapper[4805]: E0216 20:58:49.747479 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.247460719 +0000 UTC m=+148.066144104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.762690 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-psbvs" podStartSLOduration=125.762671833 podStartE2EDuration="2m5.762671833s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:49.759512527 +0000 UTC m=+147.578195832" watchObservedRunningTime="2026-02-16 20:58:49.762671833 +0000 UTC m=+147.581355128" Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.845560 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:49 crc kubenswrapper[4805]: E0216 20:58:49.846088 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.346069909 +0000 UTC m=+148.164753214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:49 crc kubenswrapper[4805]: I0216 20:58:49.947428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:49 crc kubenswrapper[4805]: E0216 20:58:49.947834 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.447817594 +0000 UTC m=+148.266500889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.049047 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.049229 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.549200908 +0000 UTC m=+148.367884203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.049841 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.050232 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.550215166 +0000 UTC m=+148.368898461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.110519 4805 patch_prober.go:28] interesting pod/router-default-5444994796-fzjtf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:50 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 16 20:58:50 crc kubenswrapper[4805]: [+]process-running ok Feb 16 20:58:50 crc kubenswrapper[4805]: healthz check failed Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.110592 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fzjtf" podUID="9ebe9ce6-6b40-435b-a14f-85a80c4ce52a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.150339 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.150594 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:50.650545621 +0000 UTC m=+148.469228926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.174356 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.251522 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.251876 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.751864935 +0000 UTC m=+148.570548230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.353025 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.353106 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.853089295 +0000 UTC m=+148.671772590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.353177 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.353429 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.853421363 +0000 UTC m=+148.672104648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.453876 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.454024 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.954000896 +0000 UTC m=+148.772684201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.454190 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.454471 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:50.954462599 +0000 UTC m=+148.773145894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.554924 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.555104 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.055072122 +0000 UTC m=+148.873755407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.555387 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.555713 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.05570582 +0000 UTC m=+148.874389105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.617904 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fmmsk" event={"ID":"0f43042e-2586-44f6-93f1-0a0284d35381","Type":"ContainerStarted","Data":"3038c89299f173f9e16d30be2859fa11a858461853162ecf27967908aa4edb1f"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.619534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" event={"ID":"a451d6a2-4e84-4838-89be-08a88869a68e","Type":"ContainerStarted","Data":"f2648cd9bb592c1d12ae53417781e41502d03104d2820858a4a54683fcb989b4"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.620770 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" event={"ID":"18bc5c62-7469-4926-a3e2-fe9eb48844c8","Type":"ContainerStarted","Data":"9490568a5f677336c920f1b4829b010972b8d4e628526ae7d5c9525b8b9a6b49"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.622524 4805 generic.go:334] "Generic (PLEG): container finished" podID="66159345-0259-49a9-a234-ce7520f5b6c6" containerID="96714de158cc9d1bbc0d036263b924c2f6e943e2f0ba54fa7cff15d7425915c2" exitCode=0 Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.622578 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" 
event={"ID":"66159345-0259-49a9-a234-ce7520f5b6c6","Type":"ContainerDied","Data":"96714de158cc9d1bbc0d036263b924c2f6e943e2f0ba54fa7cff15d7425915c2"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.638370 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" event={"ID":"58ebf1a4-7ae3-4ae0-aa48-467a3c4f3c40","Type":"ContainerStarted","Data":"c8f52784d2bb80d058f77deec236a5b8d5ba529861efe04577a94e6e4d7e153b"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.644321 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" event={"ID":"721818d2-9ed6-4791-8fd7-8e01a1bbbe10","Type":"ContainerStarted","Data":"11f4c41e2f1a3bf8769bbcdc5c277a32f6179007b8b9e89619e6073315167232"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.652710 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" podStartSLOduration=125.652692085 podStartE2EDuration="2m5.652692085s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.647216696 +0000 UTC m=+148.465899991" watchObservedRunningTime="2026-02-16 20:58:50.652692085 +0000 UTC m=+148.471375380" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.654805 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" event={"ID":"bbf84de5-d8b7-4f52-98ec-76d973dc290c","Type":"ContainerStarted","Data":"1329072cf50c99443d6d15e1b8af10c8e1ddbefcaf11a594dd927f655dc154ae"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.658657 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.659934 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.159913181 +0000 UTC m=+148.978596476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.675360 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" event={"ID":"83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3","Type":"ContainerStarted","Data":"75066d5a47e1063ce84d63215214b78a974075be5fca57b742bb867cdd015acd"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.675442 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" event={"ID":"83b2c64c-1a8e-4be7-87f6-3e4a00ff56a3","Type":"ContainerStarted","Data":"6156a9c2e8fbee3de364fca5e7b7d171768f203cb219f1ae897e990ecf901eb7"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.675729 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" Feb 16 20:58:50 crc kubenswrapper[4805]: 
I0216 20:58:50.686767 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" event={"ID":"5f9120dc-89fa-43b6-b757-925e25598369","Type":"ContainerStarted","Data":"b3868c9119d57375808f058713470823a4bdb4c014593083afc6a0b9f16fdd3b"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.692409 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" event={"ID":"37168f5d-63ef-497d-bbbf-06b677a39490","Type":"ContainerStarted","Data":"30f3743db0922fbbd4911de75f2110026caacb0df124ec3cafc076917872bd04"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.695823 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" event={"ID":"61c85f40-93cf-46d4-8a43-751ed991de0c","Type":"ContainerStarted","Data":"815ef21baaa9181e5cb7ec60806d7c3df1603aee1b7e8674209389a3e01c5371"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.697009 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.699058 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8v8x container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.699125 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" podUID="61c85f40-93cf-46d4-8a43-751ed991de0c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.709813 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" event={"ID":"ebb1f28c-06bf-4127-beab-4339bcc3c559","Type":"ContainerStarted","Data":"e1db62a3ec52443acccdc376d3e9d8dbea35db6aa6f73d28984d1ffeb2e32193"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.715682 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9jxpt" podStartSLOduration=125.715663926 podStartE2EDuration="2m5.715663926s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.714092343 +0000 UTC m=+148.532775638" watchObservedRunningTime="2026-02-16 20:58:50.715663926 +0000 UTC m=+148.534347221" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.725893 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" event={"ID":"cf703159-340e-4a50-a7fa-4eb5402fabbf","Type":"ContainerStarted","Data":"069cc26f30b0876919ec84cf4bff9d5d2cdf4d3aa44ba46f83e225b07c6b68c7"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.725938 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" event={"ID":"cf703159-340e-4a50-a7fa-4eb5402fabbf","Type":"ContainerStarted","Data":"7eed95066c4d34e5eef55e67618363d1a93be0a32cf9c3327585b228d799ceb6"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.730446 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" event={"ID":"2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d","Type":"ContainerStarted","Data":"9d8c57bfeb2fe84ae2b210379be22bc56a19446af98031469416f106b0f13e9b"} Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.731018 4805 patch_prober.go:28] interesting 
pod/catalog-operator-68c6474976-r4brg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.731059 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg" podUID="7fe20d46-1095-4f1e-b29b-bdce644a87b5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.731477 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hkjs5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.731511 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" podUID="a9ac0f09-69ad-444c-b827-cbb26c8623fb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.731947 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-2t9r2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.731974 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2t9r2" podUID="e81b583e-8f61-44e8-b206-2e7b31ca3626" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.732633 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-psbvs container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.732666 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-psbvs" podUID="fd240135-deca-4bb9-907c-0fb3995a76a5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.733899 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-psbvs" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.750835 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p2r8s" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.763347 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.763742 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:51.263714271 +0000 UTC m=+149.082397556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.784132 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.784467 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.792109 4805 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-bw2cs container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.792167 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" podUID="2bc5d8e8-00a8-4927-9bcc-2b7d3083a93d" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.849520 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hkpgz" podStartSLOduration=126.849502002 podStartE2EDuration="2m6.849502002s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.763895646 +0000 UTC m=+148.582578941" watchObservedRunningTime="2026-02-16 20:58:50.849502002 +0000 UTC m=+148.668185287" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.865075 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.865264 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.365218759 +0000 UTC m=+149.183902044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.865409 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.870005 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.369992148 +0000 UTC m=+149.188675443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.911131 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-22q5q" podStartSLOduration=125.911112566 podStartE2EDuration="2m5.911112566s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.871967843 +0000 UTC m=+148.690651138" watchObservedRunningTime="2026-02-16 20:58:50.911112566 +0000 UTC m=+148.729795861" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.966736 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs" podStartSLOduration=125.966698426 podStartE2EDuration="2m5.966698426s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.932792065 +0000 UTC m=+148.751475370" watchObservedRunningTime="2026-02-16 20:58:50.966698426 +0000 UTC m=+148.785381711" Feb 16 20:58:50 crc kubenswrapper[4805]: I0216 20:58:50.969471 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:50 crc kubenswrapper[4805]: E0216 20:58:50.969875 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.469860612 +0000 UTC m=+149.288543907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.000967 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-btkhr" podStartSLOduration=126.000946527 podStartE2EDuration="2m6.000946527s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:51.000001861 +0000 UTC m=+148.818685166" watchObservedRunningTime="2026-02-16 20:58:51.000946527 +0000 UTC m=+148.819629822" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.002044 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" podStartSLOduration=126.002035237 podStartE2EDuration="2m6.002035237s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:50.969107312 
+0000 UTC m=+148.787790607" watchObservedRunningTime="2026-02-16 20:58:51.002035237 +0000 UTC m=+148.820718532" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.020803 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4tc4" podStartSLOduration=126.020787716 podStartE2EDuration="2m6.020787716s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:51.019061879 +0000 UTC m=+148.837745184" watchObservedRunningTime="2026-02-16 20:58:51.020787716 +0000 UTC m=+148.839471011" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.073461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.073803 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.573790186 +0000 UTC m=+149.392473481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.094581 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fps5r" podStartSLOduration=127.09455904 podStartE2EDuration="2m7.09455904s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:51.092453552 +0000 UTC m=+148.911136847" watchObservedRunningTime="2026-02-16 20:58:51.09455904 +0000 UTC m=+148.913242335" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.094798 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-87pxp" podStartSLOduration=127.094792386 podStartE2EDuration="2m7.094792386s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:51.06104648 +0000 UTC m=+148.879729775" watchObservedRunningTime="2026-02-16 20:58:51.094792386 +0000 UTC m=+148.913475681" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.118868 4805 patch_prober.go:28] interesting pod/router-default-5444994796-fzjtf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:51 crc kubenswrapper[4805]: [-]has-synced 
failed: reason withheld Feb 16 20:58:51 crc kubenswrapper[4805]: [+]process-running ok Feb 16 20:58:51 crc kubenswrapper[4805]: healthz check failed Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.118923 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fzjtf" podUID="9ebe9ce6-6b40-435b-a14f-85a80c4ce52a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.130782 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4r7b" podStartSLOduration=126.130762303 podStartE2EDuration="2m6.130762303s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:51.130576378 +0000 UTC m=+148.949259683" watchObservedRunningTime="2026-02-16 20:58:51.130762303 +0000 UTC m=+148.949445598" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.162637 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-pp7j5" podStartSLOduration=126.162618249 podStartE2EDuration="2m6.162618249s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:51.161000565 +0000 UTC m=+148.979683860" watchObservedRunningTime="2026-02-16 20:58:51.162618249 +0000 UTC m=+148.981301534" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.173984 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.174319 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.674304537 +0000 UTC m=+149.492987832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.194594 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" podStartSLOduration=126.194577707 podStartE2EDuration="2m6.194577707s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:51.194277399 +0000 UTC m=+149.012960694" watchObservedRunningTime="2026-02-16 20:58:51.194577707 +0000 UTC m=+149.013261002" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.275186 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 
20:58:51.275645 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.77562953 +0000 UTC m=+149.594312825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.376474 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.376618 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.876598133 +0000 UTC m=+149.695281428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.376856 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.377127 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.877119637 +0000 UTC m=+149.695802932 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.478250 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.478468 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.97843747 +0000 UTC m=+149.797120765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.478664 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.479065 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:51.979048006 +0000 UTC m=+149.797731301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.580789 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.581437 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.081421207 +0000 UTC m=+149.900104492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.682242 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.682596 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.182580207 +0000 UTC m=+150.001263502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.737735 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" event={"ID":"66159345-0259-49a9-a234-ce7520f5b6c6","Type":"ContainerStarted","Data":"10d1fe31e63f903c846ceeb07d48a0b91868ffc3aeb9e57d7f7be9b6036949f0"} Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.738582 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-psbvs container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.738627 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-psbvs" podUID="fd240135-deca-4bb9-907c-0fb3995a76a5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.738861 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hkjs5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.738921 4805 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" podUID="a9ac0f09-69ad-444c-b827-cbb26c8623fb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.739255 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-2t9r2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.739314 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2t9r2" podUID="e81b583e-8f61-44e8-b206-2e7b31ca3626" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.783581 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.783756 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.283739735 +0000 UTC m=+150.102423020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.784408 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.784758 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.784796 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.784967 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.785054 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.787050 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.287037154 +0000 UTC m=+150.105720449 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.792349 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.792427 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.793859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.796077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.810658 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" podStartSLOduration=127.810640545 podStartE2EDuration="2m7.810640545s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:51.799683168 +0000 UTC m=+149.618366463" watchObservedRunningTime="2026-02-16 20:58:51.810640545 +0000 UTC m=+149.629323840" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.885686 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.885848 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.385823839 +0000 UTC m=+150.204507134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.931624 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.942726 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 20:58:51 crc kubenswrapper[4805]: I0216 20:58:51.988390 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:51 crc kubenswrapper[4805]: E0216 20:58:51.988657 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.488644052 +0000 UTC m=+150.307327347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.013757 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.089310 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4805]: E0216 20:58:52.089497 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.589470731 +0000 UTC m=+150.408154036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.089615 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:52 crc kubenswrapper[4805]: E0216 20:58:52.089953 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.589943405 +0000 UTC m=+150.408626700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.113178 4805 patch_prober.go:28] interesting pod/router-default-5444994796-fzjtf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:52 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 16 20:58:52 crc kubenswrapper[4805]: [+]process-running ok Feb 16 20:58:52 crc kubenswrapper[4805]: healthz check failed Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.113245 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fzjtf" podUID="9ebe9ce6-6b40-435b-a14f-85a80c4ce52a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.190375 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4805]: E0216 20:58:52.190689 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:52.6906735 +0000 UTC m=+150.509356795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.291558 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:52 crc kubenswrapper[4805]: E0216 20:58:52.292129 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.792116837 +0000 UTC m=+150.610800132 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.397378 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4805]: E0216 20:58:52.397605 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:52.897591153 +0000 UTC m=+150.716274448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4805]: W0216 20:58:52.486882 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-4ca920bcbd3d151254eed5f74e6c1760913f703e30ca09a332420d0428667c29 WatchSource:0}: Error finding container 4ca920bcbd3d151254eed5f74e6c1760913f703e30ca09a332420d0428667c29: Status 404 returned error can't find the container with id 4ca920bcbd3d151254eed5f74e6c1760913f703e30ca09a332420d0428667c29 Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.500388 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:52 crc kubenswrapper[4805]: E0216 20:58:52.500644 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.000632483 +0000 UTC m=+150.819315778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.592742 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.603096 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4805]: E0216 20:58:52.603693 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.103638051 +0000 UTC m=+150.922321346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.704136 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:52 crc kubenswrapper[4805]: E0216 20:58:52.704632 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.204615895 +0000 UTC m=+151.023299180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.738300 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8v8x container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.738352 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" podUID="61c85f40-93cf-46d4-8a43-751ed991de0c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.745922 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c90110841e61c2b315a8eba21c5fd2260c681f30cff9658bd32ed819cacf10cf"} Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.745980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4ca920bcbd3d151254eed5f74e6c1760913f703e30ca09a332420d0428667c29"} Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 
20:58:52.753804 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e7f8fc12f19df944537f07e52f6e2a75568b060a874d302198a71412f300f382"} Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.756772 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"529ba9c8cdf3a1791638a50b1e5ef0ffef4e2876744173766c83baefa4e32e13"} Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.805520 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4805]: E0216 20:58:52.805612 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.305593739 +0000 UTC m=+151.124277034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.805994 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:52 crc kubenswrapper[4805]: E0216 20:58:52.807275 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.307255874 +0000 UTC m=+151.125939239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:52 crc kubenswrapper[4805]: I0216 20:58:52.907465 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:52 crc kubenswrapper[4805]: E0216 20:58:52.908036 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.408016281 +0000 UTC m=+151.226699576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.008740 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 20:58:53.009067 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.509053886 +0000 UTC m=+151.327737181 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.094276 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bnj78"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.095462 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.097869 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.109565 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 20:58:53.109953 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.609935127 +0000 UTC m=+151.428618422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.115576 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnj78"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.116493 4805 patch_prober.go:28] interesting pod/router-default-5444994796-fzjtf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:53 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 16 20:58:53 crc kubenswrapper[4805]: [+]process-running ok Feb 16 20:58:53 crc kubenswrapper[4805]: healthz check failed Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.116531 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fzjtf" podUID="9ebe9ce6-6b40-435b-a14f-85a80c4ce52a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.190552 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.191396 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.196663 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.196930 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.211528 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.211567 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-utilities\") pod \"community-operators-bnj78\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.211595 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.211625 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl9hq\" (UniqueName: 
\"kubernetes.io/projected/2acb9625-6b32-480b-9f3d-97976930c437-kube-api-access-hl9hq\") pod \"community-operators-bnj78\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.211640 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.211658 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-catalog-content\") pod \"community-operators-bnj78\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 20:58:53.211932 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.711921188 +0000 UTC m=+151.530604483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.213515 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.226841 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8v8x" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.279199 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ql9vs"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.280078 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.289213 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.301007 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ql9vs"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.312176 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.312432 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.312481 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 20:58:53.312631 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 20:58:53.812616324 +0000 UTC m=+151.631299619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.312660 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-utilities\") pod \"community-operators-bnj78\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.312690 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.312732 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl9hq\" (UniqueName: \"kubernetes.io/projected/2acb9625-6b32-480b-9f3d-97976930c437-kube-api-access-hl9hq\") pod \"community-operators-bnj78\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.312748 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.312764 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-catalog-content\") pod \"community-operators-bnj78\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.313142 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-catalog-content\") pod \"community-operators-bnj78\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.313664 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-utilities\") pod \"community-operators-bnj78\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 20:58:53.315347 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:53.815330218 +0000 UTC m=+151.634013513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.342338 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.359566 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl9hq\" (UniqueName: \"kubernetes.io/projected/2acb9625-6b32-480b-9f3d-97976930c437-kube-api-access-hl9hq\") pod \"community-operators-bnj78\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.409435 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.414275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.414550 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-catalog-content\") pod \"certified-operators-ql9vs\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.414614 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v48l7\" (UniqueName: \"kubernetes.io/projected/2dccaada-bb80-4a57-b9f2-5b190830fc87-kube-api-access-v48l7\") pod \"certified-operators-ql9vs\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.414686 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-utilities\") pod \"certified-operators-ql9vs\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 20:58:53.414830 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-16 20:58:53.914813141 +0000 UTC m=+151.733496446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.489704 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fnfbv"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.490923 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.508480 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fnfbv"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.515410 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v48l7\" (UniqueName: \"kubernetes.io/projected/2dccaada-bb80-4a57-b9f2-5b190830fc87-kube-api-access-v48l7\") pod \"certified-operators-ql9vs\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.515466 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-utilities\") pod \"certified-operators-ql9vs\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.515519 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.515537 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-catalog-content\") pod \"certified-operators-ql9vs\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.515935 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-catalog-content\") pod \"certified-operators-ql9vs\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.516355 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-utilities\") pod \"certified-operators-ql9vs\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 20:58:53.516577 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.016566875 +0000 UTC m=+151.835250170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.535474 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v48l7\" (UniqueName: \"kubernetes.io/projected/2dccaada-bb80-4a57-b9f2-5b190830fc87-kube-api-access-v48l7\") pod \"certified-operators-ql9vs\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.536978 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.604033 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.617555 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.617780 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-utilities\") pod \"community-operators-fnfbv\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") " pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.617842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9gd7\" (UniqueName: \"kubernetes.io/projected/db65d55c-51e3-4303-b819-3f92da3814d9-kube-api-access-k9gd7\") pod \"community-operators-fnfbv\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") " pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.617864 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-catalog-content\") pod \"community-operators-fnfbv\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") " pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 20:58:53.617975 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-16 20:58:54.11796268 +0000 UTC m=+151.936645975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.693124 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8zllg"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.694101 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.712920 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8zllg"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.720819 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.720897 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-utilities\") pod \"community-operators-fnfbv\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") " pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 
20:58:53.720968 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9gd7\" (UniqueName: \"kubernetes.io/projected/db65d55c-51e3-4303-b819-3f92da3814d9-kube-api-access-k9gd7\") pod \"community-operators-fnfbv\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") " pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.721002 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-catalog-content\") pod \"community-operators-fnfbv\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") " pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 20:58:53.721393 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.2213804 +0000 UTC m=+152.040063695 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.721852 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-utilities\") pod \"community-operators-fnfbv\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") " pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.722277 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-catalog-content\") pod \"community-operators-fnfbv\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") " pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.757403 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnj78"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.760454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9gd7\" (UniqueName: \"kubernetes.io/projected/db65d55c-51e3-4303-b819-3f92da3814d9-kube-api-access-k9gd7\") pod \"community-operators-fnfbv\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") " pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.774849 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"77619568dfa73e3ecda69150e0287ac6935b22a3caa093a50313b55ca16dbb30"} Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.808264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" event={"ID":"18bc5c62-7469-4926-a3e2-fe9eb48844c8","Type":"ContainerStarted","Data":"13ece6765c64da1a1b72729e0ca6df5420ed43f3f16e13e1229f82a95b82ab59"} Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.819266 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"064c20c9adb81a696cf3c31fa0adb9edd42fad4983e2a02bdc504df91e1c127f"} Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.821133 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.822247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.822418 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-utilities\") pod \"certified-operators-8zllg\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.822493 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-catalog-content\") pod \"certified-operators-8zllg\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 20:58:53.822621 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.32260213 +0000 UTC m=+152.141285425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.822963 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr6gz\" (UniqueName: \"kubernetes.io/projected/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-kube-api-access-vr6gz\") pod \"certified-operators-8zllg\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.823042 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 
20:58:53.824515 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.324506042 +0000 UTC m=+152.143189337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.830209 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t4bkt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.830248 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt" podUID="66159345-0259-49a9-a234-ce7520f5b6c6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.830862 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.947873 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.948332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-catalog-content\") pod \"certified-operators-8zllg\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.948399 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr6gz\" (UniqueName: \"kubernetes.io/projected/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-kube-api-access-vr6gz\") pod \"certified-operators-8zllg\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.948475 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-utilities\") pod \"certified-operators-8zllg\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.948921 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-utilities\") pod \"certified-operators-8zllg\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " 
pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:58:53 crc kubenswrapper[4805]: E0216 20:58:53.949051 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.449028145 +0000 UTC m=+152.267711440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.949609 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-catalog-content\") pod \"certified-operators-8zllg\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.990442 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ql9vs"] Feb 16 20:58:53 crc kubenswrapper[4805]: I0216 20:58:53.999568 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr6gz\" (UniqueName: \"kubernetes.io/projected/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-kube-api-access-vr6gz\") pod \"certified-operators-8zllg\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.017817 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.049319 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.049943 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.549929296 +0000 UTC m=+152.368612581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.070684 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.109397 4805 patch_prober.go:28] interesting pod/router-default-5444994796-fzjtf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 20:58:54 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 16 20:58:54 crc kubenswrapper[4805]: [+]process-running ok Feb 16 20:58:54 
crc kubenswrapper[4805]: healthz check failed Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.109459 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fzjtf" podUID="9ebe9ce6-6b40-435b-a14f-85a80c4ce52a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.152888 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.153109 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.653055388 +0000 UTC m=+152.471738683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.153303 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.153633 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.653621653 +0000 UTC m=+152.472304948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.225574 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fnfbv"] Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.254678 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.254858 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.754835084 +0000 UTC m=+152.573518379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.254961 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.255294 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.755269436 +0000 UTC m=+152.573952721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.357355 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.358304 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.858285925 +0000 UTC m=+152.676969220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.374110 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8zllg"] Feb 16 20:58:54 crc kubenswrapper[4805]: W0216 20:58:54.410036 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc82acaa8_7c4e_40ea_985f_e5cc3faa0475.slice/crio-46cc68863ad9c63d16ea82bf3fbc03c1ac6db94f10d0f17858a2ebb1a99b92b9 WatchSource:0}: Error finding container 46cc68863ad9c63d16ea82bf3fbc03c1ac6db94f10d0f17858a2ebb1a99b92b9: Status 404 returned error can't find the container with id 46cc68863ad9c63d16ea82bf3fbc03c1ac6db94f10d0f17858a2ebb1a99b92b9 Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.459169 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.459505 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:54.959494224 +0000 UTC m=+152.778177509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.481655 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.487267 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-prjll" Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.560227 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.560399 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.060378115 +0000 UTC m=+152.879061400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.560619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.561518 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.061503616 +0000 UTC m=+152.880186911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.653961 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2acb9625_6b32_480b_9f3d_97976930c437.slice/crio-4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc82acaa8_7c4e_40ea_985f_e5cc3faa0475.slice/crio-80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2acb9625_6b32_480b_9f3d_97976930c437.slice/crio-conmon-4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dccaada_bb80_4a57_b9f2_5b190830fc87.slice/crio-conmon-02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1.scope\": RecentStats: unable to find data in memory cache]" Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.662596 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 
16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.662745 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.162709156 +0000 UTC m=+152.981392451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.663100 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.663493 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.163483367 +0000 UTC m=+152.982166662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.763993 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.764199 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.264169272 +0000 UTC m=+153.082852567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.764513 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.764894 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.264877482 +0000 UTC m=+153.083560787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.809212 4805 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.837886 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dccaada-bb80-4a57-b9f2-5b190830fc87" containerID="02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1" exitCode=0 Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.837946 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql9vs" event={"ID":"2dccaada-bb80-4a57-b9f2-5b190830fc87","Type":"ContainerDied","Data":"02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.837972 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql9vs" event={"ID":"2dccaada-bb80-4a57-b9f2-5b190830fc87","Type":"ContainerStarted","Data":"ef68ef12c1478e6050aeb0eeb8a69f6c8f626fe23e1b96790a54b584dcea9f72"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.840118 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.843690 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d","Type":"ContainerStarted","Data":"6abdf31f84410477e7a81f87fd1722ec95b4c982294b26d6dc108f1e897a0fce"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.843755 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d","Type":"ContainerStarted","Data":"1eb74cac4abc6877e72d6032b37cf3004e1799c074ebf4443e43ea724f145fc9"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.847514 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" event={"ID":"18bc5c62-7469-4926-a3e2-fe9eb48844c8","Type":"ContainerStarted","Data":"94cc3c3bb6cb7283af464d5e6237630c0e9c8cb94ced814d9765372a4625df24"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.847574 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" event={"ID":"18bc5c62-7469-4926-a3e2-fe9eb48844c8","Type":"ContainerStarted","Data":"bb3d9a321d067af3de9fdb446140dd3fce4e9e325c5d8a249f11cf04cbe4489e"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.853413 4805 generic.go:334] "Generic (PLEG): container finished" podID="db65d55c-51e3-4303-b819-3f92da3814d9" containerID="b83991a1d331944214b2309721a5910725e369af1f19c5397bc7a92b610e777b" exitCode=0 Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.853496 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnfbv" event={"ID":"db65d55c-51e3-4303-b819-3f92da3814d9","Type":"ContainerDied","Data":"b83991a1d331944214b2309721a5910725e369af1f19c5397bc7a92b610e777b"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.853520 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnfbv" 
event={"ID":"db65d55c-51e3-4303-b819-3f92da3814d9","Type":"ContainerStarted","Data":"d7589cbc694cb37b31619a0697b72a7e3b61b19cdf4372be03ba07b98c064cbc"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.856068 4805 generic.go:334] "Generic (PLEG): container finished" podID="2acb9625-6b32-480b-9f3d-97976930c437" containerID="4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478" exitCode=0 Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.856140 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnj78" event={"ID":"2acb9625-6b32-480b-9f3d-97976930c437","Type":"ContainerDied","Data":"4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.856184 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnj78" event={"ID":"2acb9625-6b32-480b-9f3d-97976930c437","Type":"ContainerStarted","Data":"c16b0057acf5626b78e0f9c42cb3cd255cea3d51ae964742cea78ef67eaec5ec"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.860653 4805 generic.go:334] "Generic (PLEG): container finished" podID="a451d6a2-4e84-4838-89be-08a88869a68e" containerID="f2648cd9bb592c1d12ae53417781e41502d03104d2820858a4a54683fcb989b4" exitCode=0 Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.860749 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" event={"ID":"a451d6a2-4e84-4838-89be-08a88869a68e","Type":"ContainerDied","Data":"f2648cd9bb592c1d12ae53417781e41502d03104d2820858a4a54683fcb989b4"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.864410 4805 generic.go:334] "Generic (PLEG): container finished" podID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerID="80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0" exitCode=0 Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.865368 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zllg" event={"ID":"c82acaa8-7c4e-40ea-985f-e5cc3faa0475","Type":"ContainerDied","Data":"80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.865395 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zllg" event={"ID":"c82acaa8-7c4e-40ea-985f-e5cc3faa0475","Type":"ContainerStarted","Data":"46cc68863ad9c63d16ea82bf3fbc03c1ac6db94f10d0f17858a2ebb1a99b92b9"} Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.866832 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.868032 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.368013414 +0000 UTC m=+153.186696719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.885118 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.885100848 podStartE2EDuration="1.885100848s" podCreationTimestamp="2026-02-16 20:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:54.884874942 +0000 UTC m=+152.703558247" watchObservedRunningTime="2026-02-16 20:58:54.885100848 +0000 UTC m=+152.703784153" Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.961817 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-r5h2d" podStartSLOduration=11.961800582 podStartE2EDuration="11.961800582s" podCreationTimestamp="2026-02-16 20:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:54.958713338 +0000 UTC m=+152.777396653" watchObservedRunningTime="2026-02-16 20:58:54.961800582 +0000 UTC m=+152.780483877" Feb 16 20:58:54 crc kubenswrapper[4805]: I0216 20:58:54.968930 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:54 crc kubenswrapper[4805]: E0216 20:58:54.970592 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.47055976 +0000 UTC m=+153.289243055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.069897 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.070107 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.570079654 +0000 UTC m=+153.388762949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.070238 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.070586 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.570574627 +0000 UTC m=+153.389257922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.081326 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hrj84"]
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.082885 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrj84"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.084995 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.099029 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrj84"]
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.108986 4805 patch_prober.go:28] interesting pod/router-default-5444994796-fzjtf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 20:58:55 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Feb 16 20:58:55 crc kubenswrapper[4805]: [+]process-running ok
Feb 16 20:58:55 crc kubenswrapper[4805]: healthz check failed
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.109056 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fzjtf" podUID="9ebe9ce6-6b40-435b-a14f-85a80c4ce52a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.171239 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.171357 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-utilities\") pod \"redhat-marketplace-hrj84\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " pod="openshift-marketplace/redhat-marketplace-hrj84"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.171378 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6hwh\" (UniqueName: \"kubernetes.io/projected/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-kube-api-access-g6hwh\") pod \"redhat-marketplace-hrj84\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " pod="openshift-marketplace/redhat-marketplace-hrj84"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.171408 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-catalog-content\") pod \"redhat-marketplace-hrj84\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " pod="openshift-marketplace/redhat-marketplace-hrj84"
Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.171593 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.67154975 +0000 UTC m=+153.490233085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.273521 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.273638 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-utilities\") pod \"redhat-marketplace-hrj84\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " pod="openshift-marketplace/redhat-marketplace-hrj84"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.273677 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6hwh\" (UniqueName: \"kubernetes.io/projected/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-kube-api-access-g6hwh\") pod \"redhat-marketplace-hrj84\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " pod="openshift-marketplace/redhat-marketplace-hrj84"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.273774 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-catalog-content\") pod \"redhat-marketplace-hrj84\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " pod="openshift-marketplace/redhat-marketplace-hrj84"
Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.274015 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.773995415 +0000 UTC m=+153.592678720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.274326 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-utilities\") pod \"redhat-marketplace-hrj84\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " pod="openshift-marketplace/redhat-marketplace-hrj84"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.274493 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-catalog-content\") pod \"redhat-marketplace-hrj84\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " pod="openshift-marketplace/redhat-marketplace-hrj84"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.302976 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6hwh\" (UniqueName: \"kubernetes.io/projected/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-kube-api-access-g6hwh\") pod \"redhat-marketplace-hrj84\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " pod="openshift-marketplace/redhat-marketplace-hrj84"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.362347 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.363640 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.368585 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.369032 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.374882 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.375123 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.875089831 +0000 UTC m=+153.693773166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.375226 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa4bf91e-725c-49e3-beff-0ccd28a26fd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.375284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.375389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa4bf91e-725c-49e3-beff-0ccd28a26fd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.375701 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.875686337 +0000 UTC m=+153.694369672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.378904 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.401326 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrj84"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.476640 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.476895 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa4bf91e-725c-49e3-beff-0ccd28a26fd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.476955 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa4bf91e-725c-49e3-beff-0ccd28a26fd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.477090 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:55.977064542 +0000 UTC m=+153.795747837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.477096 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa4bf91e-725c-49e3-beff-0ccd28a26fd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.478477 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7479h"]
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.480131 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.499928 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa4bf91e-725c-49e3-beff-0ccd28a26fd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.513703 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7479h"]
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.581471 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-catalog-content\") pod \"redhat-marketplace-7479h\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.581528 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-utilities\") pod \"redhat-marketplace-7479h\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.581583 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8jhq\" (UniqueName: \"kubernetes.io/projected/b5594412-8308-44f4-9f7e-15a2411d7f6a-kube-api-access-t8jhq\") pod \"redhat-marketplace-7479h\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.581616 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5"
Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.582030 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:56.081916331 +0000 UTC m=+153.900599626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.645423 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t4bkt"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.683050 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.683216 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:56.183183992 +0000 UTC m=+154.001867287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.683288 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.683438 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-catalog-content\") pod \"redhat-marketplace-7479h\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.683517 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-utilities\") pod \"redhat-marketplace-7479h\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.683616 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:56.183597713 +0000 UTC m=+154.002281078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.683689 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8jhq\" (UniqueName: \"kubernetes.io/projected/b5594412-8308-44f4-9f7e-15a2411d7f6a-kube-api-access-t8jhq\") pod \"redhat-marketplace-7479h\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.683840 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-catalog-content\") pod \"redhat-marketplace-7479h\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.684075 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-utilities\") pod \"redhat-marketplace-7479h\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.699471 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8jhq\" (UniqueName: \"kubernetes.io/projected/b5594412-8308-44f4-9f7e-15a2411d7f6a-kube-api-access-t8jhq\") pod \"redhat-marketplace-7479h\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.772702 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.784681 4805 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T20:58:54.809249587Z","Handler":null,"Name":""}
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.785204 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.785485 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 20:58:56.285461931 +0000 UTC m=+154.104145226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.785571 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5"
Feb 16 20:58:55 crc kubenswrapper[4805]: E0216 20:58:55.785882 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 20:58:56.285871962 +0000 UTC m=+154.104555257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w44f5" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.794204 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.828169 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.872050 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bw2cs"
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.876408 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab82b9d7-ce88-435a-96eb-171b1a7a5d0d" containerID="6abdf31f84410477e7a81f87fd1722ec95b4c982294b26d6dc108f1e897a0fce" exitCode=0
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.876747 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d","Type":"ContainerDied","Data":"6abdf31f84410477e7a81f87fd1722ec95b4c982294b26d6dc108f1e897a0fce"}
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.887950 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.891291 4805 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.891331 4805 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.929281 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrj84"]
Feb 16 20:58:55 crc kubenswrapper[4805]: I0216 20:58:55.946577 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:55.992594 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.000605 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.000647 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.054625 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-h2zb9"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.054679 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-h2zb9"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.060622 4805 patch_prober.go:28] interesting pod/console-f9d7485db-h2zb9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.060675 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-h2zb9" podUID="2530eb64-2099-45e0-9727-ea9987f22ed5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.093495 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w44f5\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " pod="openshift-image-registry/image-registry-697d97f7c8-w44f5"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.105039 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.105088 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-fzjtf"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.133104 4805 patch_prober.go:28] interesting pod/router-default-5444994796-fzjtf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 20:58:56 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Feb 16 20:58:56 crc kubenswrapper[4805]: [+]process-running ok
Feb 16 20:58:56 crc kubenswrapper[4805]: healthz check failed
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.133190 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fzjtf" podUID="9ebe9ce6-6b40-435b-a14f-85a80c4ce52a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.212881 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-2t9r2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.212920 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-2t9r2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.212937 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2t9r2" podUID="e81b583e-8f61-44e8-b206-2e7b31ca3626" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.212978 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2t9r2" podUID="e81b583e-8f61-44e8-b206-2e7b31ca3626" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.283681 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mw567"]
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.285706 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mw567"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.288872 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.302397 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mw567"]
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.328169 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-psbvs"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.344351 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7479h"]
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.353643 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.386498 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4brg"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.394522 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5"
Feb 16 20:58:56 crc kubenswrapper[4805]: W0216 20:58:56.399383 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5594412_8308_44f4_9f7e_15a2411d7f6a.slice/crio-fe57cbdc3191c925e727ecfd9d041d093406d905f7098d6375a68ad2925ad524 WatchSource:0}: Error finding container fe57cbdc3191c925e727ecfd9d041d093406d905f7098d6375a68ad2925ad524: Status 404 returned error can't find the container with id fe57cbdc3191c925e727ecfd9d041d093406d905f7098d6375a68ad2925ad524
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.405076 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.415332 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a451d6a2-4e84-4838-89be-08a88869a68e-config-volume\") pod \"a451d6a2-4e84-4838-89be-08a88869a68e\" (UID: \"a451d6a2-4e84-4838-89be-08a88869a68e\") "
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.415411 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7rsf\" (UniqueName: \"kubernetes.io/projected/a451d6a2-4e84-4838-89be-08a88869a68e-kube-api-access-m7rsf\") pod \"a451d6a2-4e84-4838-89be-08a88869a68e\" (UID: \"a451d6a2-4e84-4838-89be-08a88869a68e\") "
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.415445 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a451d6a2-4e84-4838-89be-08a88869a68e-secret-volume\") pod \"a451d6a2-4e84-4838-89be-08a88869a68e\" (UID: \"a451d6a2-4e84-4838-89be-08a88869a68e\") "
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.415624 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwkzw\" (UniqueName: \"kubernetes.io/projected/e1a06996-a3de-413f-b05e-852d5c0fa7ff-kube-api-access-jwkzw\") pod \"redhat-operators-mw567\" (UID: \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " pod="openshift-marketplace/redhat-operators-mw567"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.415661 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-catalog-content\") pod \"redhat-operators-mw567\" (UID: \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " pod="openshift-marketplace/redhat-operators-mw567"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.415738 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-utilities\") pod \"redhat-operators-mw567\" (UID: \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " pod="openshift-marketplace/redhat-operators-mw567"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.416795 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a451d6a2-4e84-4838-89be-08a88869a68e-config-volume" (OuterVolumeSpecName: "config-volume") pod "a451d6a2-4e84-4838-89be-08a88869a68e" (UID: "a451d6a2-4e84-4838-89be-08a88869a68e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.426169 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a451d6a2-4e84-4838-89be-08a88869a68e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a451d6a2-4e84-4838-89be-08a88869a68e" (UID: "a451d6a2-4e84-4838-89be-08a88869a68e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.427276 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a451d6a2-4e84-4838-89be-08a88869a68e-kube-api-access-m7rsf" (OuterVolumeSpecName: "kube-api-access-m7rsf") pod "a451d6a2-4e84-4838-89be-08a88869a68e" (UID: "a451d6a2-4e84-4838-89be-08a88869a68e"). InnerVolumeSpecName "kube-api-access-m7rsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.516514 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwkzw\" (UniqueName: \"kubernetes.io/projected/e1a06996-a3de-413f-b05e-852d5c0fa7ff-kube-api-access-jwkzw\") pod \"redhat-operators-mw567\" (UID: \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " pod="openshift-marketplace/redhat-operators-mw567"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.516584 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-catalog-content\") pod \"redhat-operators-mw567\" (UID: \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " pod="openshift-marketplace/redhat-operators-mw567"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.516635 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-utilities\") pod 
\"redhat-operators-mw567\" (UID: \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " pod="openshift-marketplace/redhat-operators-mw567" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.516695 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a451d6a2-4e84-4838-89be-08a88869a68e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.516707 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7rsf\" (UniqueName: \"kubernetes.io/projected/a451d6a2-4e84-4838-89be-08a88869a68e-kube-api-access-m7rsf\") on node \"crc\" DevicePath \"\"" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.516731 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a451d6a2-4e84-4838-89be-08a88869a68e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.517433 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-catalog-content\") pod \"redhat-operators-mw567\" (UID: \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " pod="openshift-marketplace/redhat-operators-mw567" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.517446 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-utilities\") pod \"redhat-operators-mw567\" (UID: \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " pod="openshift-marketplace/redhat-operators-mw567" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.534367 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwkzw\" (UniqueName: \"kubernetes.io/projected/e1a06996-a3de-413f-b05e-852d5c0fa7ff-kube-api-access-jwkzw\") pod \"redhat-operators-mw567\" (UID: 
\"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " pod="openshift-marketplace/redhat-operators-mw567" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.652994 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mw567" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.688551 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-878vf"] Feb 16 20:58:56 crc kubenswrapper[4805]: E0216 20:58:56.688779 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a451d6a2-4e84-4838-89be-08a88869a68e" containerName="collect-profiles" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.688795 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a451d6a2-4e84-4838-89be-08a88869a68e" containerName="collect-profiles" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.688919 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a451d6a2-4e84-4838-89be-08a88869a68e" containerName="collect-profiles" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.689679 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.710987 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-878vf"] Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.730082 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-utilities\") pod \"redhat-operators-878vf\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.730146 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-catalog-content\") pod \"redhat-operators-878vf\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.730199 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvh2q\" (UniqueName: \"kubernetes.io/projected/8509d47c-b1fc-473a-9252-6c50c7a630b7-kube-api-access-dvh2q\") pod \"redhat-operators-878vf\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.843913 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvh2q\" (UniqueName: \"kubernetes.io/projected/8509d47c-b1fc-473a-9252-6c50c7a630b7-kube-api-access-dvh2q\") pod \"redhat-operators-878vf\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.844401 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-utilities\") pod \"redhat-operators-878vf\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.846410 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-catalog-content\") pod \"redhat-operators-878vf\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.847469 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-catalog-content\") pod \"redhat-operators-878vf\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.847948 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-utilities\") pod \"redhat-operators-878vf\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.895229 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvh2q\" (UniqueName: \"kubernetes.io/projected/8509d47c-b1fc-473a-9252-6c50c7a630b7-kube-api-access-dvh2q\") pod \"redhat-operators-878vf\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.909115 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-w44f5"]
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.928115 4805 generic.go:334] "Generic (PLEG): container finished" podID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerID="7ea736aa870a4b4ec2ddaf1f40b370e6fbd12643039e373c19284fe835dbd94f" exitCode=0
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.928198 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7479h" event={"ID":"b5594412-8308-44f4-9f7e-15a2411d7f6a","Type":"ContainerDied","Data":"7ea736aa870a4b4ec2ddaf1f40b370e6fbd12643039e373c19284fe835dbd94f"}
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.928230 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7479h" event={"ID":"b5594412-8308-44f4-9f7e-15a2411d7f6a","Type":"ContainerStarted","Data":"fe57cbdc3191c925e727ecfd9d041d093406d905f7098d6375a68ad2925ad524"}
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.944961 4805 generic.go:334] "Generic (PLEG): container finished" podID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerID="c7300f43a252eb39931bee78b6ed1542aef31bea77753ad97b9c7b77dfbb01c6" exitCode=0
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.945375 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrj84" event={"ID":"6bba21b2-c506-44e1-87e9-9ef5067ff1e5","Type":"ContainerDied","Data":"c7300f43a252eb39931bee78b6ed1542aef31bea77753ad97b9c7b77dfbb01c6"}
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.945398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrj84" event={"ID":"6bba21b2-c506-44e1-87e9-9ef5067ff1e5","Type":"ContainerStarted","Data":"223c03e00b686a9bed1db31428c7623ff512bab050c77fabdf2e9afad4bd2067"}
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.957210 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa4bf91e-725c-49e3-beff-0ccd28a26fd2","Type":"ContainerStarted","Data":"51541105e833f25d4977c15c7463584f2f8bafa5e8085fccad725fce1dbf7e64"}
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.962383 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk"
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.963680 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk" event={"ID":"a451d6a2-4e84-4838-89be-08a88869a68e","Type":"ContainerDied","Data":"ec5b82bf0b5957fbc906c78a6d7c579752205f2177531d461654628ad18d7e03"}
Feb 16 20:58:56 crc kubenswrapper[4805]: I0216 20:58:56.963739 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec5b82bf0b5957fbc906c78a6d7c579752205f2177531d461654628ad18d7e03"
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.048049 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-878vf"
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.110138 4805 patch_prober.go:28] interesting pod/router-default-5444994796-fzjtf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 20:58:57 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Feb 16 20:58:57 crc kubenswrapper[4805]: [+]process-running ok
Feb 16 20:58:57 crc kubenswrapper[4805]: healthz check failed
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.110193 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fzjtf" podUID="9ebe9ce6-6b40-435b-a14f-85a80c4ce52a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.120126 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mw567"]
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.412324 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.467877 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kube-api-access\") pod \"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d\" (UID: \"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d\") "
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.467954 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kubelet-dir\") pod \"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d\" (UID: \"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d\") "
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.468303 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ab82b9d7-ce88-435a-96eb-171b1a7a5d0d" (UID: "ab82b9d7-ce88-435a-96eb-171b1a7a5d0d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.482587 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ab82b9d7-ce88-435a-96eb-171b1a7a5d0d" (UID: "ab82b9d7-ce88-435a-96eb-171b1a7a5d0d"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.531192 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-878vf"]
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.569571 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.569606 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab82b9d7-ce88-435a-96eb-171b1a7a5d0d-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.626929 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.970646 4805 generic.go:334] "Generic (PLEG): container finished" podID="fa4bf91e-725c-49e3-beff-0ccd28a26fd2" containerID="dac0775b4b56f570ccfa9bbb07d137be4d860c54715563d4968cefb9ef3cef7d" exitCode=0
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.970765 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa4bf91e-725c-49e3-beff-0ccd28a26fd2","Type":"ContainerDied","Data":"dac0775b4b56f570ccfa9bbb07d137be4d860c54715563d4968cefb9ef3cef7d"}
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.975926 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ab82b9d7-ce88-435a-96eb-171b1a7a5d0d","Type":"ContainerDied","Data":"1eb74cac4abc6877e72d6032b37cf3004e1799c074ebf4443e43ea724f145fc9"}
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.975953 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.975959 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb74cac4abc6877e72d6032b37cf3004e1799c074ebf4443e43ea724f145fc9"
Feb 16 20:58:57 crc kubenswrapper[4805]: I0216 20:58:57.978591 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-878vf" event={"ID":"8509d47c-b1fc-473a-9252-6c50c7a630b7","Type":"ContainerStarted","Data":"bf17409b8513c8a106b6cec9ec5a098d2bbce9f606bee4c40cd32b1af53bd83c"}
Feb 16 20:58:58 crc kubenswrapper[4805]: I0216 20:58:57.997605 4805 generic.go:334] "Generic (PLEG): container finished" podID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" containerID="7e9a119c368e9bcfa2c441fa03fbe5c4044311d00b9bdc1d0d4c54f9dc18e91e" exitCode=0
Feb 16 20:58:58 crc kubenswrapper[4805]: I0216 20:58:57.997665 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw567" event={"ID":"e1a06996-a3de-413f-b05e-852d5c0fa7ff","Type":"ContainerDied","Data":"7e9a119c368e9bcfa2c441fa03fbe5c4044311d00b9bdc1d0d4c54f9dc18e91e"}
Feb 16 20:58:58 crc kubenswrapper[4805]: I0216 20:58:57.997686 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw567" event={"ID":"e1a06996-a3de-413f-b05e-852d5c0fa7ff","Type":"ContainerStarted","Data":"909eeabe438281a1189410aa1d1ae522be90680c773468b5ba862a935b1c7a1d"}
Feb 16 20:58:58 crc kubenswrapper[4805]: I0216 20:58:58.020027 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" event={"ID":"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729","Type":"ContainerStarted","Data":"7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5"}
Feb 16 20:58:58 crc kubenswrapper[4805]: I0216 20:58:58.020063 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" event={"ID":"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729","Type":"ContainerStarted","Data":"fec177d927ec213c95dbc03510073b0bf6bae59f00b4b3dce2a9047cbef53b9c"}
Feb 16 20:58:58 crc kubenswrapper[4805]: I0216 20:58:58.020228 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5"
Feb 16 20:58:58 crc kubenswrapper[4805]: I0216 20:58:58.044040 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" podStartSLOduration=134.044022275 podStartE2EDuration="2m14.044022275s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:58.043507022 +0000 UTC m=+155.862190317" watchObservedRunningTime="2026-02-16 20:58:58.044022275 +0000 UTC m=+155.862705560"
Feb 16 20:58:58 crc kubenswrapper[4805]: I0216 20:58:58.113006 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-fzjtf"
Feb 16 20:58:58 crc kubenswrapper[4805]: I0216 20:58:58.115817 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-fzjtf"
Feb 16 20:58:58 crc kubenswrapper[4805]: I0216 20:58:58.171352 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fmmsk"
Feb 16 20:58:59 crc kubenswrapper[4805]: I0216 20:58:59.028554 4805 generic.go:334] "Generic (PLEG): container finished" podID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerID="10270f2c81ebf5d5575459c001b9460924d5900f99afe55759517a2e96506627" exitCode=0
Feb 16 20:58:59 crc kubenswrapper[4805]: I0216 20:58:59.029125 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-878vf" event={"ID":"8509d47c-b1fc-473a-9252-6c50c7a630b7","Type":"ContainerDied","Data":"10270f2c81ebf5d5575459c001b9460924d5900f99afe55759517a2e96506627"}
Feb 16 20:58:59 crc kubenswrapper[4805]: I0216 20:58:59.421171 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 20:58:59 crc kubenswrapper[4805]: I0216 20:58:59.519712 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kube-api-access\") pod \"fa4bf91e-725c-49e3-beff-0ccd28a26fd2\" (UID: \"fa4bf91e-725c-49e3-beff-0ccd28a26fd2\") "
Feb 16 20:58:59 crc kubenswrapper[4805]: I0216 20:58:59.520182 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kubelet-dir\") pod \"fa4bf91e-725c-49e3-beff-0ccd28a26fd2\" (UID: \"fa4bf91e-725c-49e3-beff-0ccd28a26fd2\") "
Feb 16 20:58:59 crc kubenswrapper[4805]: I0216 20:58:59.520283 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fa4bf91e-725c-49e3-beff-0ccd28a26fd2" (UID: "fa4bf91e-725c-49e3-beff-0ccd28a26fd2"). InnerVolumeSpecName "kubelet-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:58:59 crc kubenswrapper[4805]: I0216 20:58:59.520489 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 16 20:58:59 crc kubenswrapper[4805]: I0216 20:58:59.526017 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fa4bf91e-725c-49e3-beff-0ccd28a26fd2" (UID: "fa4bf91e-725c-49e3-beff-0ccd28a26fd2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:58:59 crc kubenswrapper[4805]: I0216 20:58:59.621327 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4bf91e-725c-49e3-beff-0ccd28a26fd2-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:00 crc kubenswrapper[4805]: I0216 20:59:00.046279 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa4bf91e-725c-49e3-beff-0ccd28a26fd2","Type":"ContainerDied","Data":"51541105e833f25d4977c15c7463584f2f8bafa5e8085fccad725fce1dbf7e64"}
Feb 16 20:59:00 crc kubenswrapper[4805]: I0216 20:59:00.046330 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51541105e833f25d4977c15c7463584f2f8bafa5e8085fccad725fce1dbf7e64"
Feb 16 20:59:00 crc kubenswrapper[4805]: I0216 20:59:00.046389 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 20:59:06 crc kubenswrapper[4805]: I0216 20:59:06.058639 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-h2zb9"
Feb 16 20:59:06 crc kubenswrapper[4805]: I0216 20:59:06.062672 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-h2zb9"
Feb 16 20:59:06 crc kubenswrapper[4805]: I0216 20:59:06.201621 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-2t9r2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 16 20:59:06 crc kubenswrapper[4805]: I0216 20:59:06.201673 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2t9r2" podUID="e81b583e-8f61-44e8-b206-2e7b31ca3626" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 16 20:59:06 crc kubenswrapper[4805]: I0216 20:59:06.201709 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-2t9r2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 16 20:59:06 crc kubenswrapper[4805]: I0216 20:59:06.201811 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2t9r2" podUID="e81b583e-8f61-44e8-b206-2e7b31ca3626" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 16 20:59:07 crc kubenswrapper[4805]: I0216 20:59:07.486210 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh"
Feb 16 20:59:07 crc kubenswrapper[4805]: I0216 20:59:07.495489 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68747e4a-6576-44c3-b663-250315f6712f-metrics-certs\") pod \"network-metrics-daemon-b6xdh\" (UID: \"68747e4a-6576-44c3-b663-250315f6712f\") " pod="openshift-multus/network-metrics-daemon-b6xdh"
Feb 16 20:59:07 crc kubenswrapper[4805]: I0216 20:59:07.521824 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b6xdh"
Feb 16 20:59:08 crc kubenswrapper[4805]: I0216 20:59:08.099393 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 20:59:08 crc kubenswrapper[4805]: I0216 20:59:08.099446 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 20:59:09 crc kubenswrapper[4805]: I0216 20:59:09.244922 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-45jmj"]
Feb 16 20:59:09 crc kubenswrapper[4805]: I0216 20:59:09.245607 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" podUID="e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" containerName="controller-manager" containerID="cri-o://858970c68d4a78819972ca85cd81e6f15f8862826331839ed56552b0885c5225" gracePeriod=30
Feb 16 20:59:09 crc kubenswrapper[4805]: I0216 20:59:09.286271 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v"]
Feb 16 20:59:09 crc kubenswrapper[4805]: I0216 20:59:09.286890 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" podUID="cc45d729-38e4-4964-b5b6-de896f734fe8" containerName="route-controller-manager" containerID="cri-o://72e2493fbf1e2b6a9f4b3e6c404cfdfdcda772800466c2d7c77c9c933e2e4792" gracePeriod=30
Feb 16 20:59:11 crc kubenswrapper[4805]: I0216 20:59:11.143472 4805 generic.go:334] "Generic (PLEG): container finished" podID="e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" containerID="858970c68d4a78819972ca85cd81e6f15f8862826331839ed56552b0885c5225" exitCode=0
Feb 16 20:59:11 crc kubenswrapper[4805]: I0216 20:59:11.143510 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" event={"ID":"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de","Type":"ContainerDied","Data":"858970c68d4a78819972ca85cd81e6f15f8862826331839ed56552b0885c5225"}
Feb 16 20:59:14 crc kubenswrapper[4805]: I0216 20:59:14.167573 4805 generic.go:334] "Generic (PLEG): container finished" podID="cc45d729-38e4-4964-b5b6-de896f734fe8" containerID="72e2493fbf1e2b6a9f4b3e6c404cfdfdcda772800466c2d7c77c9c933e2e4792" exitCode=0
Feb 16 20:59:14 crc kubenswrapper[4805]: I0216 20:59:14.167766 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" event={"ID":"cc45d729-38e4-4964-b5b6-de896f734fe8","Type":"ContainerDied","Data":"72e2493fbf1e2b6a9f4b3e6c404cfdfdcda772800466c2d7c77c9c933e2e4792"}
Feb 16 20:59:15 crc kubenswrapper[4805]: I0216 20:59:15.578887 4805 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-gsn4v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 16 20:59:15 crc kubenswrapper[4805]: I0216 20:59:15.579136 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" podUID="cc45d729-38e4-4964-b5b6-de896f734fe8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 16 20:59:16 crc kubenswrapper[4805]: I0216 20:59:16.219010 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2t9r2"
Feb 16 20:59:16 crc kubenswrapper[4805]: I0216 20:59:16.401037 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5"
Feb 16 20:59:16 crc kubenswrapper[4805]: I0216 20:59:16.909368 4805 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-45jmj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded" start-of-body=
Feb 16 20:59:16 crc kubenswrapper[4805]: I0216 20:59:16.909437 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" podUID="e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded"
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.253376 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj"
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.297259 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-94c47745d-csgw5"]
Feb 16 20:59:18 crc kubenswrapper[4805]: E0216 20:59:18.297534 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4bf91e-725c-49e3-beff-0ccd28a26fd2" containerName="pruner"
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.297551 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4bf91e-725c-49e3-beff-0ccd28a26fd2" containerName="pruner"
Feb 16 20:59:18 crc kubenswrapper[4805]: E0216 20:59:18.297568 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab82b9d7-ce88-435a-96eb-171b1a7a5d0d" containerName="pruner"
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.297576 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab82b9d7-ce88-435a-96eb-171b1a7a5d0d" containerName="pruner"
Feb 16 20:59:18 crc kubenswrapper[4805]: E0216 20:59:18.297594 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" containerName="controller-manager"
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.297603 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" containerName="controller-manager"
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.297765 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4bf91e-725c-49e3-beff-0ccd28a26fd2" containerName="pruner"
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.297783 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab82b9d7-ce88-435a-96eb-171b1a7a5d0d" containerName="pruner"
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.297797 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" containerName="controller-manager"
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.298283 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5"
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.305160 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-94c47745d-csgw5"]
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.434776 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvgt9\" (UniqueName: \"kubernetes.io/projected/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-kube-api-access-hvgt9\") pod \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") "
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.434856 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-serving-cert\") pod \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") "
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.434935 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-proxy-ca-bundles\") pod \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") "
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.434975 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-config\") pod \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") "
Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.435001 4805 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-client-ca\") pod \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\" (UID: \"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de\") " Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.435197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-config\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.435225 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-client-ca\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.435256 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b821df-c32e-4da6-8887-1fcef1a5c6e0-serving-cert\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.435298 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbjr\" (UniqueName: \"kubernetes.io/projected/34b821df-c32e-4da6-8887-1fcef1a5c6e0-kube-api-access-bnbjr\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 
20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.435336 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-proxy-ca-bundles\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.435746 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" (UID: "e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.435978 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-config" (OuterVolumeSpecName: "config") pod "e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" (UID: "e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.437017 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-client-ca" (OuterVolumeSpecName: "client-ca") pod "e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" (UID: "e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.443550 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-kube-api-access-hvgt9" (OuterVolumeSpecName: "kube-api-access-hvgt9") pod "e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" (UID: "e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de"). InnerVolumeSpecName "kube-api-access-hvgt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.443889 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" (UID: "e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.536661 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-config\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.537035 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-client-ca\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.537069 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/34b821df-c32e-4da6-8887-1fcef1a5c6e0-serving-cert\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.537111 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnbjr\" (UniqueName: \"kubernetes.io/projected/34b821df-c32e-4da6-8887-1fcef1a5c6e0-kube-api-access-bnbjr\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.537150 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-proxy-ca-bundles\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.537214 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvgt9\" (UniqueName: \"kubernetes.io/projected/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-kube-api-access-hvgt9\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.537228 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.537239 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.537250 4805 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.537260 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.538053 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-config\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.538265 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-proxy-ca-bundles\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.538643 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-client-ca\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.542251 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b821df-c32e-4da6-8887-1fcef1a5c6e0-serving-cert\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " 
pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.554602 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnbjr\" (UniqueName: \"kubernetes.io/projected/34b821df-c32e-4da6-8887-1fcef1a5c6e0-kube-api-access-bnbjr\") pod \"controller-manager-94c47745d-csgw5\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:18 crc kubenswrapper[4805]: I0216 20:59:18.615659 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:19 crc kubenswrapper[4805]: I0216 20:59:19.195002 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" event={"ID":"e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de","Type":"ContainerDied","Data":"4a54ee263569c625c9bee9b3f127b3fa8febe165d6f18a828ba66251de96e787"} Feb 16 20:59:19 crc kubenswrapper[4805]: I0216 20:59:19.195398 4805 scope.go:117] "RemoveContainer" containerID="858970c68d4a78819972ca85cd81e6f15f8862826331839ed56552b0885c5225" Feb 16 20:59:19 crc kubenswrapper[4805]: I0216 20:59:19.195081 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-45jmj" Feb 16 20:59:19 crc kubenswrapper[4805]: I0216 20:59:19.253098 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-45jmj"] Feb 16 20:59:19 crc kubenswrapper[4805]: I0216 20:59:19.255906 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-45jmj"] Feb 16 20:59:19 crc kubenswrapper[4805]: I0216 20:59:19.604142 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de" path="/var/lib/kubelet/pods/e9d90f3e-1a5e-4b1c-b22a-706c1c7ee3de/volumes" Feb 16 20:59:24 crc kubenswrapper[4805]: I0216 20:59:24.231404 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wwm8v"] Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.198233 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.231626 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85"] Feb 16 20:59:25 crc kubenswrapper[4805]: E0216 20:59:25.232171 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc45d729-38e4-4964-b5b6-de896f734fe8" containerName="route-controller-manager" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.233042 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc45d729-38e4-4964-b5b6-de896f734fe8" containerName="route-controller-manager" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.233615 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.235901 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc45d729-38e4-4964-b5b6-de896f734fe8" containerName="route-controller-manager" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.236359 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v" event={"ID":"cc45d729-38e4-4964-b5b6-de896f734fe8","Type":"ContainerDied","Data":"75842bf86b823e8fb6596afb8be5a81f0a1f69ee6533670c082d795318525d59"} Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.236510 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.239087 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85"] Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.297411 4805 scope.go:117] "RemoveContainer" containerID="72e2493fbf1e2b6a9f4b3e6c404cfdfdcda772800466c2d7c77c9c933e2e4792" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.339603 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-client-ca\") pod \"cc45d729-38e4-4964-b5b6-de896f734fe8\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.341344 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-client-ca" (OuterVolumeSpecName: "client-ca") pod "cc45d729-38e4-4964-b5b6-de896f734fe8" (UID: "cc45d729-38e4-4964-b5b6-de896f734fe8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.341586 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc45d729-38e4-4964-b5b6-de896f734fe8-serving-cert\") pod \"cc45d729-38e4-4964-b5b6-de896f734fe8\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.341946 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-config\") pod \"cc45d729-38e4-4964-b5b6-de896f734fe8\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.342007 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59jl6\" (UniqueName: \"kubernetes.io/projected/cc45d729-38e4-4964-b5b6-de896f734fe8-kube-api-access-59jl6\") pod \"cc45d729-38e4-4964-b5b6-de896f734fe8\" (UID: \"cc45d729-38e4-4964-b5b6-de896f734fe8\") " Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.342206 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-config\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.342255 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19e4d077-75a0-4e39-8fc2-da96ab91e36d-serving-cert\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " 
pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.342311 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbpcr\" (UniqueName: \"kubernetes.io/projected/19e4d077-75a0-4e39-8fc2-da96ab91e36d-kube-api-access-qbpcr\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.342442 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-client-ca\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.342499 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-config" (OuterVolumeSpecName: "config") pod "cc45d729-38e4-4964-b5b6-de896f734fe8" (UID: "cc45d729-38e4-4964-b5b6-de896f734fe8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.342640 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.342657 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc45d729-38e4-4964-b5b6-de896f734fe8-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.347672 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc45d729-38e4-4964-b5b6-de896f734fe8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cc45d729-38e4-4964-b5b6-de896f734fe8" (UID: "cc45d729-38e4-4964-b5b6-de896f734fe8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.350985 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc45d729-38e4-4964-b5b6-de896f734fe8-kube-api-access-59jl6" (OuterVolumeSpecName: "kube-api-access-59jl6") pod "cc45d729-38e4-4964-b5b6-de896f734fe8" (UID: "cc45d729-38e4-4964-b5b6-de896f734fe8"). InnerVolumeSpecName "kube-api-access-59jl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.443655 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-client-ca\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.443932 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-config\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.443988 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19e4d077-75a0-4e39-8fc2-da96ab91e36d-serving-cert\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.444025 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbpcr\" (UniqueName: \"kubernetes.io/projected/19e4d077-75a0-4e39-8fc2-da96ab91e36d-kube-api-access-qbpcr\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.444142 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59jl6\" (UniqueName: 
\"kubernetes.io/projected/cc45d729-38e4-4964-b5b6-de896f734fe8-kube-api-access-59jl6\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.444159 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc45d729-38e4-4964-b5b6-de896f734fe8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.445167 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-client-ca\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.447976 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-config\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.448986 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19e4d077-75a0-4e39-8fc2-da96ab91e36d-serving-cert\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.462865 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbpcr\" (UniqueName: \"kubernetes.io/projected/19e4d077-75a0-4e39-8fc2-da96ab91e36d-kube-api-access-qbpcr\") pod \"route-controller-manager-6496f877d6-r9k85\" (UID: 
\"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.566173 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.586350 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v"] Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.594543 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gsn4v"] Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.626455 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc45d729-38e4-4964-b5b6-de896f734fe8" path="/var/lib/kubelet/pods/cc45d729-38e4-4964-b5b6-de896f734fe8/volumes" Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.630517 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-94c47745d-csgw5"] Feb 16 20:59:25 crc kubenswrapper[4805]: W0216 20:59:25.660951 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34b821df_c32e_4da6_8887_1fcef1a5c6e0.slice/crio-8fd3503d1a291b9bb458709bd9195bae5eecf88480bfc9bd817e27b953ecf570 WatchSource:0}: Error finding container 8fd3503d1a291b9bb458709bd9195bae5eecf88480bfc9bd817e27b953ecf570: Status 404 returned error can't find the container with id 8fd3503d1a291b9bb458709bd9195bae5eecf88480bfc9bd817e27b953ecf570 Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.680742 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-b6xdh"] Feb 16 20:59:25 crc kubenswrapper[4805]: I0216 20:59:25.828139 4805 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85"] Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.239873 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" event={"ID":"68747e4a-6576-44c3-b663-250315f6712f","Type":"ContainerStarted","Data":"d569d7dd201e2d32cbf1082ee7baeec2728cc24d9f3d673a546c2cecaf44cb2e"} Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.249584 4805 generic.go:334] "Generic (PLEG): container finished" podID="2acb9625-6b32-480b-9f3d-97976930c437" containerID="a7734c0a297f3f725bfd7d409a04b801626eaeea36e6665e99f9002080ce57fa" exitCode=0 Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.249642 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnj78" event={"ID":"2acb9625-6b32-480b-9f3d-97976930c437","Type":"ContainerDied","Data":"a7734c0a297f3f725bfd7d409a04b801626eaeea36e6665e99f9002080ce57fa"} Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.252378 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" event={"ID":"34b821df-c32e-4da6-8887-1fcef1a5c6e0","Type":"ContainerStarted","Data":"fefab94f6585fee499f2b905f1ec1ca3354c0ecb8836aa9bd7d75150a6ac1153"} Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.252901 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" event={"ID":"34b821df-c32e-4da6-8887-1fcef1a5c6e0","Type":"ContainerStarted","Data":"8fd3503d1a291b9bb458709bd9195bae5eecf88480bfc9bd817e27b953ecf570"} Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.258382 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql9vs" 
event={"ID":"2dccaada-bb80-4a57-b9f2-5b190830fc87","Type":"ContainerDied","Data":"54a0859f3bf70f8758e3a378fe027a46402808e05765df191ab47e4dffff618b"} Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.254615 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dccaada-bb80-4a57-b9f2-5b190830fc87" containerID="54a0859f3bf70f8758e3a378fe027a46402808e05765df191ab47e4dffff618b" exitCode=0 Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.267710 4805 generic.go:334] "Generic (PLEG): container finished" podID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerID="fffcf0ea05883de8e9fc2590a296e807ee3f6f07beed74b4755707cfe0356c81" exitCode=0 Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.267798 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7479h" event={"ID":"b5594412-8308-44f4-9f7e-15a2411d7f6a","Type":"ContainerDied","Data":"fffcf0ea05883de8e9fc2590a296e807ee3f6f07beed74b4755707cfe0356c81"} Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.272784 4805 generic.go:334] "Generic (PLEG): container finished" podID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" containerID="05ff83845bc67caca88c38babd74bcfc2da6adfa0d622d21e26c3ef6ed42259f" exitCode=0 Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.272829 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw567" event={"ID":"e1a06996-a3de-413f-b05e-852d5c0fa7ff","Type":"ContainerDied","Data":"05ff83845bc67caca88c38babd74bcfc2da6adfa0d622d21e26c3ef6ed42259f"} Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.278896 4805 generic.go:334] "Generic (PLEG): container finished" podID="db65d55c-51e3-4303-b819-3f92da3814d9" containerID="653ac299c2bf3639c6c9f34dd612afee851bf3d7a2d9e5dac4360b977aacb3fa" exitCode=0 Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.278953 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnfbv" 
event={"ID":"db65d55c-51e3-4303-b819-3f92da3814d9","Type":"ContainerDied","Data":"653ac299c2bf3639c6c9f34dd612afee851bf3d7a2d9e5dac4360b977aacb3fa"} Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.282437 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" event={"ID":"19e4d077-75a0-4e39-8fc2-da96ab91e36d","Type":"ContainerStarted","Data":"a80aec888342b81825e069c3979e8a84f24305bb7c17bc029377ca4abac99441"} Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.291543 4805 generic.go:334] "Generic (PLEG): container finished" podID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerID="4cb7daf1682a37012409eba48d9df1883f6b0565381eef4e2586b1ac896c78b4" exitCode=0 Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.291610 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zllg" event={"ID":"c82acaa8-7c4e-40ea-985f-e5cc3faa0475","Type":"ContainerDied","Data":"4cb7daf1682a37012409eba48d9df1883f6b0565381eef4e2586b1ac896c78b4"} Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.300380 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-878vf" event={"ID":"8509d47c-b1fc-473a-9252-6c50c7a630b7","Type":"ContainerStarted","Data":"be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca"} Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.304187 4805 generic.go:334] "Generic (PLEG): container finished" podID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerID="af76306cfd962a79cd5349fd7148a2a4b171bcd21519bb9ec4cee3c29d3bf97a" exitCode=0 Feb 16 20:59:26 crc kubenswrapper[4805]: I0216 20:59:26.304235 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrj84" event={"ID":"6bba21b2-c506-44e1-87e9-9ef5067ff1e5","Type":"ContainerDied","Data":"af76306cfd962a79cd5349fd7148a2a4b171bcd21519bb9ec4cee3c29d3bf97a"} Feb 16 20:59:26 crc 
kubenswrapper[4805]: I0216 20:59:26.340383 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b7jmg" Feb 16 20:59:27 crc kubenswrapper[4805]: I0216 20:59:27.313651 4805 generic.go:334] "Generic (PLEG): container finished" podID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerID="be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca" exitCode=0 Feb 16 20:59:27 crc kubenswrapper[4805]: I0216 20:59:27.313864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-878vf" event={"ID":"8509d47c-b1fc-473a-9252-6c50c7a630b7","Type":"ContainerDied","Data":"be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca"} Feb 16 20:59:27 crc kubenswrapper[4805]: I0216 20:59:27.319380 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" event={"ID":"68747e4a-6576-44c3-b663-250315f6712f","Type":"ContainerStarted","Data":"8a43fc63cc216314655bf66a298682b0010e096292619ad64cc4096746a95290"} Feb 16 20:59:27 crc kubenswrapper[4805]: I0216 20:59:27.319412 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b6xdh" event={"ID":"68747e4a-6576-44c3-b663-250315f6712f","Type":"ContainerStarted","Data":"219358d6ce7b41a58e3aa41c74c6a1bd44940c1aa697d36ce1ba8aac57f1d8b1"} Feb 16 20:59:27 crc kubenswrapper[4805]: I0216 20:59:27.322139 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" event={"ID":"19e4d077-75a0-4e39-8fc2-da96ab91e36d","Type":"ContainerStarted","Data":"4c150a14529aa8817e3d8690895f08eb74e07f8880547c9f45798a090d42a9c8"} Feb 16 20:59:27 crc kubenswrapper[4805]: I0216 20:59:27.322227 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:27 crc 
kubenswrapper[4805]: I0216 20:59:27.322522 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:27 crc kubenswrapper[4805]: I0216 20:59:27.329647 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:27 crc kubenswrapper[4805]: I0216 20:59:27.332281 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:27 crc kubenswrapper[4805]: I0216 20:59:27.348736 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" podStartSLOduration=18.348697717 podStartE2EDuration="18.348697717s" podCreationTimestamp="2026-02-16 20:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:27.346083026 +0000 UTC m=+185.164766391" watchObservedRunningTime="2026-02-16 20:59:27.348697717 +0000 UTC m=+185.167381012" Feb 16 20:59:27 crc kubenswrapper[4805]: I0216 20:59:27.402380 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-b6xdh" podStartSLOduration=163.40219738 podStartE2EDuration="2m43.40219738s" podCreationTimestamp="2026-02-16 20:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:27.397835472 +0000 UTC m=+185.216518777" watchObservedRunningTime="2026-02-16 20:59:27.40219738 +0000 UTC m=+185.220880685" Feb 16 20:59:27 crc kubenswrapper[4805]: I0216 20:59:27.406035 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" podStartSLOduration=18.406019314 podStartE2EDuration="18.406019314s" podCreationTimestamp="2026-02-16 20:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:27.378681791 +0000 UTC m=+185.197365096" watchObservedRunningTime="2026-02-16 20:59:27.406019314 +0000 UTC m=+185.224702609" Feb 16 20:59:28 crc kubenswrapper[4805]: I0216 20:59:28.329796 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql9vs" event={"ID":"2dccaada-bb80-4a57-b9f2-5b190830fc87","Type":"ContainerStarted","Data":"79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6"} Feb 16 20:59:28 crc kubenswrapper[4805]: I0216 20:59:28.349405 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ql9vs" podStartSLOduration=2.313077687 podStartE2EDuration="35.349390125s" podCreationTimestamp="2026-02-16 20:58:53 +0000 UTC" firstStartedPulling="2026-02-16 20:58:54.839846629 +0000 UTC m=+152.658529924" lastFinishedPulling="2026-02-16 20:59:27.876159077 +0000 UTC m=+185.694842362" observedRunningTime="2026-02-16 20:59:28.348922282 +0000 UTC m=+186.167605577" watchObservedRunningTime="2026-02-16 20:59:28.349390125 +0000 UTC m=+186.168073420" Feb 16 20:59:29 crc kubenswrapper[4805]: I0216 20:59:29.181046 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-94c47745d-csgw5"] Feb 16 20:59:29 crc kubenswrapper[4805]: I0216 20:59:29.273213 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85"] Feb 16 20:59:30 crc kubenswrapper[4805]: I0216 20:59:30.343856 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnj78" 
event={"ID":"2acb9625-6b32-480b-9f3d-97976930c437","Type":"ContainerStarted","Data":"8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55"} Feb 16 20:59:30 crc kubenswrapper[4805]: I0216 20:59:30.343972 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" podUID="19e4d077-75a0-4e39-8fc2-da96ab91e36d" containerName="route-controller-manager" containerID="cri-o://4c150a14529aa8817e3d8690895f08eb74e07f8880547c9f45798a090d42a9c8" gracePeriod=30 Feb 16 20:59:30 crc kubenswrapper[4805]: I0216 20:59:30.344363 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" podUID="34b821df-c32e-4da6-8887-1fcef1a5c6e0" containerName="controller-manager" containerID="cri-o://fefab94f6585fee499f2b905f1ec1ca3354c0ecb8836aa9bd7d75150a6ac1153" gracePeriod=30 Feb 16 20:59:30 crc kubenswrapper[4805]: I0216 20:59:30.366172 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bnj78" podStartSLOduration=3.268967018 podStartE2EDuration="37.366156771s" podCreationTimestamp="2026-02-16 20:58:53 +0000 UTC" firstStartedPulling="2026-02-16 20:58:54.859743249 +0000 UTC m=+152.678426544" lastFinishedPulling="2026-02-16 20:59:28.956933002 +0000 UTC m=+186.775616297" observedRunningTime="2026-02-16 20:59:30.3657556 +0000 UTC m=+188.184438895" watchObservedRunningTime="2026-02-16 20:59:30.366156771 +0000 UTC m=+188.184840066" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.354045 4805 generic.go:334] "Generic (PLEG): container finished" podID="19e4d077-75a0-4e39-8fc2-da96ab91e36d" containerID="4c150a14529aa8817e3d8690895f08eb74e07f8880547c9f45798a090d42a9c8" exitCode=0 Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.354164 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" event={"ID":"19e4d077-75a0-4e39-8fc2-da96ab91e36d","Type":"ContainerDied","Data":"4c150a14529aa8817e3d8690895f08eb74e07f8880547c9f45798a090d42a9c8"} Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.354328 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" event={"ID":"19e4d077-75a0-4e39-8fc2-da96ab91e36d","Type":"ContainerDied","Data":"a80aec888342b81825e069c3979e8a84f24305bb7c17bc029377ca4abac99441"} Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.354345 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a80aec888342b81825e069c3979e8a84f24305bb7c17bc029377ca4abac99441" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.355654 4805 generic.go:334] "Generic (PLEG): container finished" podID="34b821df-c32e-4da6-8887-1fcef1a5c6e0" containerID="fefab94f6585fee499f2b905f1ec1ca3354c0ecb8836aa9bd7d75150a6ac1153" exitCode=0 Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.355703 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" event={"ID":"34b821df-c32e-4da6-8887-1fcef1a5c6e0","Type":"ContainerDied","Data":"fefab94f6585fee499f2b905f1ec1ca3354c0ecb8836aa9bd7d75150a6ac1153"} Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.358390 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zllg" event={"ID":"c82acaa8-7c4e-40ea-985f-e5cc3faa0475","Type":"ContainerStarted","Data":"c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141"} Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.371714 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.395936 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt"] Feb 16 20:59:31 crc kubenswrapper[4805]: E0216 20:59:31.396167 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e4d077-75a0-4e39-8fc2-da96ab91e36d" containerName="route-controller-manager" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.396178 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e4d077-75a0-4e39-8fc2-da96ab91e36d" containerName="route-controller-manager" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.396270 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e4d077-75a0-4e39-8fc2-da96ab91e36d" containerName="route-controller-manager" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.396623 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.408193 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt"] Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.529454 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbpcr\" (UniqueName: \"kubernetes.io/projected/19e4d077-75a0-4e39-8fc2-da96ab91e36d-kube-api-access-qbpcr\") pod \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.529495 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-client-ca\") pod \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.529554 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19e4d077-75a0-4e39-8fc2-da96ab91e36d-serving-cert\") pod \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.529595 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-config\") pod \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\" (UID: \"19e4d077-75a0-4e39-8fc2-da96ab91e36d\") " Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.529755 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-config\") pod 
\"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.529782 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-client-ca\") pod \"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.529809 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/068b9847-19dc-4a62-849f-161f95935fe4-serving-cert\") pod \"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.529855 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nffnc\" (UniqueName: \"kubernetes.io/projected/068b9847-19dc-4a62-849f-161f95935fe4-kube-api-access-nffnc\") pod \"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.540048 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-client-ca" (OuterVolumeSpecName: "client-ca") pod "19e4d077-75a0-4e39-8fc2-da96ab91e36d" (UID: "19e4d077-75a0-4e39-8fc2-da96ab91e36d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.541328 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-config" (OuterVolumeSpecName: "config") pod "19e4d077-75a0-4e39-8fc2-da96ab91e36d" (UID: "19e4d077-75a0-4e39-8fc2-da96ab91e36d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.630469 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-config\") pod \"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.631008 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-client-ca\") pod \"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.631122 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/068b9847-19dc-4a62-849f-161f95935fe4-serving-cert\") pod \"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.631244 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nffnc\" (UniqueName: 
\"kubernetes.io/projected/068b9847-19dc-4a62-849f-161f95935fe4-kube-api-access-nffnc\") pod \"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.631365 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.631456 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e4d077-75a0-4e39-8fc2-da96ab91e36d-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.632811 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-config\") pod \"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.633456 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-client-ca\") pod \"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.638278 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/068b9847-19dc-4a62-849f-161f95935fe4-serving-cert\") pod \"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " 
pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.671198 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e4d077-75a0-4e39-8fc2-da96ab91e36d-kube-api-access-qbpcr" (OuterVolumeSpecName: "kube-api-access-qbpcr") pod "19e4d077-75a0-4e39-8fc2-da96ab91e36d" (UID: "19e4d077-75a0-4e39-8fc2-da96ab91e36d"). InnerVolumeSpecName "kube-api-access-qbpcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.673424 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e4d077-75a0-4e39-8fc2-da96ab91e36d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "19e4d077-75a0-4e39-8fc2-da96ab91e36d" (UID: "19e4d077-75a0-4e39-8fc2-da96ab91e36d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.690653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nffnc\" (UniqueName: \"kubernetes.io/projected/068b9847-19dc-4a62-849f-161f95935fe4-kube-api-access-nffnc\") pod \"route-controller-manager-769ff99c5c-2cdqt\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") " pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.709583 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.732417 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19e4d077-75a0-4e39-8fc2-da96ab91e36d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.732444 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbpcr\" (UniqueName: \"kubernetes.io/projected/19e4d077-75a0-4e39-8fc2-da96ab91e36d-kube-api-access-qbpcr\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:31 crc kubenswrapper[4805]: I0216 20:59:31.888708 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.652391 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.652460 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.652465 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-94c47745d-csgw5" event={"ID":"34b821df-c32e-4da6-8887-1fcef1a5c6e0","Type":"ContainerDied","Data":"8fd3503d1a291b9bb458709bd9195bae5eecf88480bfc9bd817e27b953ecf570"} Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.652595 4805 scope.go:117] "RemoveContainer" containerID="fefab94f6585fee499f2b905f1ec1ca3354c0ecb8836aa9bd7d75150a6ac1153" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.659160 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.735255 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8zllg" podStartSLOduration=3.5715030580000002 podStartE2EDuration="39.735232588s" podCreationTimestamp="2026-02-16 20:58:53 +0000 UTC" firstStartedPulling="2026-02-16 20:58:54.866294087 +0000 UTC m=+152.684977382" lastFinishedPulling="2026-02-16 20:59:31.030023617 +0000 UTC m=+188.848706912" observedRunningTime="2026-02-16 20:59:32.730484808 +0000 UTC m=+190.549168103" watchObservedRunningTime="2026-02-16 20:59:32.735232588 +0000 UTC m=+190.553915883" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.748712 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85"] Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.749268 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-proxy-ca-bundles\") pod \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\" (UID: 
\"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.749367 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b821df-c32e-4da6-8887-1fcef1a5c6e0-serving-cert\") pod \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.749440 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-client-ca\") pod \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.749485 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnbjr\" (UniqueName: \"kubernetes.io/projected/34b821df-c32e-4da6-8887-1fcef1a5c6e0-kube-api-access-bnbjr\") pod \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.749528 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-config\") pod \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\" (UID: \"34b821df-c32e-4da6-8887-1fcef1a5c6e0\") " Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.750839 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-config" (OuterVolumeSpecName: "config") pod "34b821df-c32e-4da6-8887-1fcef1a5c6e0" (UID: "34b821df-c32e-4da6-8887-1fcef1a5c6e0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.751342 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "34b821df-c32e-4da6-8887-1fcef1a5c6e0" (UID: "34b821df-c32e-4da6-8887-1fcef1a5c6e0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.751411 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-client-ca" (OuterVolumeSpecName: "client-ca") pod "34b821df-c32e-4da6-8887-1fcef1a5c6e0" (UID: "34b821df-c32e-4da6-8887-1fcef1a5c6e0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.754382 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b821df-c32e-4da6-8887-1fcef1a5c6e0-kube-api-access-bnbjr" (OuterVolumeSpecName: "kube-api-access-bnbjr") pod "34b821df-c32e-4da6-8887-1fcef1a5c6e0" (UID: "34b821df-c32e-4da6-8887-1fcef1a5c6e0"). InnerVolumeSpecName "kube-api-access-bnbjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.754893 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6496f877d6-r9k85"] Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.759609 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b821df-c32e-4da6-8887-1fcef1a5c6e0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "34b821df-c32e-4da6-8887-1fcef1a5c6e0" (UID: "34b821df-c32e-4da6-8887-1fcef1a5c6e0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.850551 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnbjr\" (UniqueName: \"kubernetes.io/projected/34b821df-c32e-4da6-8887-1fcef1a5c6e0-kube-api-access-bnbjr\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.850589 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-config\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.850602 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.850615 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b821df-c32e-4da6-8887-1fcef1a5c6e0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.850627 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b821df-c32e-4da6-8887-1fcef1a5c6e0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.983010 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-94c47745d-csgw5"] Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.985522 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-94c47745d-csgw5"] Feb 16 20:59:32 crc kubenswrapper[4805]: I0216 20:59:32.987643 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt"] Feb 16 20:59:32 crc kubenswrapper[4805]: 
W0216 20:59:32.989880 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod068b9847_19dc_4a62_849f_161f95935fe4.slice/crio-8058b2dbba3bcc7a93124cca6452eaebd2e7787f452e85b5275647e2f313a933 WatchSource:0}: Error finding container 8058b2dbba3bcc7a93124cca6452eaebd2e7787f452e85b5275647e2f313a933: Status 404 returned error can't find the container with id 8058b2dbba3bcc7a93124cca6452eaebd2e7787f452e85b5275647e2f313a933 Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.410114 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.410397 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.603028 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e4d077-75a0-4e39-8fc2-da96ab91e36d" path="/var/lib/kubelet/pods/19e4d077-75a0-4e39-8fc2-da96ab91e36d/volumes" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.603678 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b821df-c32e-4da6-8887-1fcef1a5c6e0" path="/var/lib/kubelet/pods/34b821df-c32e-4da6-8887-1fcef1a5c6e0/volumes" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.604158 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.604181 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.660314 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrj84" 
event={"ID":"6bba21b2-c506-44e1-87e9-9ef5067ff1e5","Type":"ContainerStarted","Data":"bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec"} Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.663180 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" event={"ID":"068b9847-19dc-4a62-849f-161f95935fe4","Type":"ContainerStarted","Data":"7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb"} Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.663217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" event={"ID":"068b9847-19dc-4a62-849f-161f95935fe4","Type":"ContainerStarted","Data":"8058b2dbba3bcc7a93124cca6452eaebd2e7787f452e85b5275647e2f313a933"} Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.663505 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.672234 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7"] Feb 16 20:59:33 crc kubenswrapper[4805]: E0216 20:59:33.672439 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b821df-c32e-4da6-8887-1fcef1a5c6e0" containerName="controller-manager" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.672451 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b821df-c32e-4da6-8887-1fcef1a5c6e0" containerName="controller-manager" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.672556 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b821df-c32e-4da6-8887-1fcef1a5c6e0" containerName="controller-manager" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.672931 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.678951 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.679123 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.681990 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.682021 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.682143 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.682193 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.690018 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.692810 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hrj84" podStartSLOduration=2.840498185 podStartE2EDuration="38.692794984s" podCreationTimestamp="2026-02-16 20:58:55 +0000 UTC" firstStartedPulling="2026-02-16 20:58:56.952278542 +0000 UTC m=+154.770961837" lastFinishedPulling="2026-02-16 20:59:32.804575331 +0000 UTC m=+190.623258636" observedRunningTime="2026-02-16 20:59:33.682440963 +0000 UTC m=+191.501124258" watchObservedRunningTime="2026-02-16 20:59:33.692794984 
+0000 UTC m=+191.511478279" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.694829 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7"] Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.727404 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" podStartSLOduration=4.727387014 podStartE2EDuration="4.727387014s" podCreationTimestamp="2026-02-16 20:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:33.710532096 +0000 UTC m=+191.529215391" watchObservedRunningTime="2026-02-16 20:59:33.727387014 +0000 UTC m=+191.546070309" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.862324 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-client-ca\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.862395 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-proxy-ca-bundles\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.862419 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt2z2\" (UniqueName: \"kubernetes.io/projected/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-kube-api-access-zt2z2\") pod 
\"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.862828 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-config\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.863032 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-serving-cert\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.964480 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-client-ca\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.964546 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-proxy-ca-bundles\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.964568 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-zt2z2\" (UniqueName: \"kubernetes.io/projected/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-kube-api-access-zt2z2\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.964619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-config\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.964667 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-serving-cert\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.965449 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-client-ca\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.966690 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-config\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.966836 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-proxy-ca-bundles\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.969616 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-serving-cert\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.985650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt2z2\" (UniqueName: \"kubernetes.io/projected/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-kube-api-access-zt2z2\") pod \"controller-manager-7bd9b9754c-vzdw7\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") " pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:33 crc kubenswrapper[4805]: I0216 20:59:33.987913 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.019319 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.019371 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.075688 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.157848 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.158440 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.164186 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.167488 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf88f505-3919-4584-a468-c19acc3a41c1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf88f505-3919-4584-a468-c19acc3a41c1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.167543 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf88f505-3919-4584-a468-c19acc3a41c1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf88f505-3919-4584-a468-c19acc3a41c1\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.169677 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.173806 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.181101 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.189604 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.268235 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.269001 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf88f505-3919-4584-a468-c19acc3a41c1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf88f505-3919-4584-a468-c19acc3a41c1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.269112 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf88f505-3919-4584-a468-c19acc3a41c1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf88f505-3919-4584-a468-c19acc3a41c1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.269478 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/cf88f505-3919-4584-a468-c19acc3a41c1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf88f505-3919-4584-a468-c19acc3a41c1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.289498 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf88f505-3919-4584-a468-c19acc3a41c1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf88f505-3919-4584-a468-c19acc3a41c1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.474571 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.553692 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7"] Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.683461 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw567" event={"ID":"e1a06996-a3de-413f-b05e-852d5c0fa7ff","Type":"ContainerStarted","Data":"1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8"} Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.687593 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnfbv" event={"ID":"db65d55c-51e3-4303-b819-3f92da3814d9","Type":"ContainerStarted","Data":"2a6988a09cb181e0e0b291344610eb3ec2934e54ac8a6c471b6843589ecbf18d"} Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.691060 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" event={"ID":"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a","Type":"ContainerStarted","Data":"be01a69b10b4841eee3a35772c4e3931627e590d83bd8bbaf047672a4c8e5eba"} Feb 16 20:59:34 crc 
kubenswrapper[4805]: I0216 20:59:34.698278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7479h" event={"ID":"b5594412-8308-44f4-9f7e-15a2411d7f6a","Type":"ContainerStarted","Data":"91d42afafbdfe8e62fcf18fddec1a1ac3d5ba6d49fdb929a0f0722994ae37c7e"} Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.704291 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-878vf" event={"ID":"8509d47c-b1fc-473a-9252-6c50c7a630b7","Type":"ContainerStarted","Data":"08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55"} Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.726417 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mw567" podStartSLOduration=2.477178865 podStartE2EDuration="38.726400097s" podCreationTimestamp="2026-02-16 20:58:56 +0000 UTC" firstStartedPulling="2026-02-16 20:58:58.009872818 +0000 UTC m=+155.828556113" lastFinishedPulling="2026-02-16 20:59:34.25909405 +0000 UTC m=+192.077777345" observedRunningTime="2026-02-16 20:59:34.704398189 +0000 UTC m=+192.523081494" watchObservedRunningTime="2026-02-16 20:59:34.726400097 +0000 UTC m=+192.545083392" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.726754 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fnfbv" podStartSLOduration=2.360754642 podStartE2EDuration="41.726749036s" podCreationTimestamp="2026-02-16 20:58:53 +0000 UTC" firstStartedPulling="2026-02-16 20:58:54.855377531 +0000 UTC m=+152.674060826" lastFinishedPulling="2026-02-16 20:59:34.221371925 +0000 UTC m=+192.040055220" observedRunningTime="2026-02-16 20:59:34.723251542 +0000 UTC m=+192.541934837" watchObservedRunningTime="2026-02-16 20:59:34.726749036 +0000 UTC m=+192.545432321" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.742487 4805 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-7479h" podStartSLOduration=2.503022196 podStartE2EDuration="39.742472193s" podCreationTimestamp="2026-02-16 20:58:55 +0000 UTC" firstStartedPulling="2026-02-16 20:58:56.938654093 +0000 UTC m=+154.757337388" lastFinishedPulling="2026-02-16 20:59:34.1781041 +0000 UTC m=+191.996787385" observedRunningTime="2026-02-16 20:59:34.741560319 +0000 UTC m=+192.560243634" watchObservedRunningTime="2026-02-16 20:59:34.742472193 +0000 UTC m=+192.561155488" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.757422 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.764402 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-878vf" podStartSLOduration=3.5576402 podStartE2EDuration="38.764386639s" podCreationTimestamp="2026-02-16 20:58:56 +0000 UTC" firstStartedPulling="2026-02-16 20:58:59.031739141 +0000 UTC m=+156.850422436" lastFinishedPulling="2026-02-16 20:59:34.23848558 +0000 UTC m=+192.057168875" observedRunningTime="2026-02-16 20:59:34.763106974 +0000 UTC m=+192.581790269" watchObservedRunningTime="2026-02-16 20:59:34.764386639 +0000 UTC m=+192.583069934" Feb 16 20:59:34 crc kubenswrapper[4805]: I0216 20:59:34.768116 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bnj78" Feb 16 20:59:34 crc kubenswrapper[4805]: W0216 20:59:34.772760 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcf88f505_3919_4584_a468_c19acc3a41c1.slice/crio-c1ad141be303ba0041d0fa21c614f9eced5df62fe1706ecd07dc9be286aeb7a6 WatchSource:0}: Error finding container c1ad141be303ba0041d0fa21c614f9eced5df62fe1706ecd07dc9be286aeb7a6: Status 404 returned error can't find the container with id 
c1ad141be303ba0041d0fa21c614f9eced5df62fe1706ecd07dc9be286aeb7a6 Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.162119 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8zllg" podUID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerName="registry-server" probeResult="failure" output=< Feb 16 20:59:35 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 20:59:35 crc kubenswrapper[4805]: > Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.402281 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hrj84" Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.402325 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hrj84" Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.711809 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" event={"ID":"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a","Type":"ContainerStarted","Data":"9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966"} Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.712057 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.714290 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cf88f505-3919-4584-a468-c19acc3a41c1","Type":"ContainerStarted","Data":"21e6f5b603aebe0879718ba69a41ad53165d58af70a5814dd804d725ed371137"} Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.714750 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"cf88f505-3919-4584-a468-c19acc3a41c1","Type":"ContainerStarted","Data":"c1ad141be303ba0041d0fa21c614f9eced5df62fe1706ecd07dc9be286aeb7a6"} Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.717122 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.730240 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" podStartSLOduration=6.730216911 podStartE2EDuration="6.730216911s" podCreationTimestamp="2026-02-16 20:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:35.727585829 +0000 UTC m=+193.546269124" watchObservedRunningTime="2026-02-16 20:59:35.730216911 +0000 UTC m=+193.548900216" Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.751061 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.751036226 podStartE2EDuration="1.751036226s" podCreationTimestamp="2026-02-16 20:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:35.739821392 +0000 UTC m=+193.558504687" watchObservedRunningTime="2026-02-16 20:59:35.751036226 +0000 UTC m=+193.569719601" Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.828697 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7479h" Feb 16 20:59:35 crc kubenswrapper[4805]: I0216 20:59:35.828969 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7479h" Feb 16 20:59:36 crc kubenswrapper[4805]: I0216 20:59:36.440846 4805 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hrj84" podUID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerName="registry-server" probeResult="failure" output=< Feb 16 20:59:36 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 20:59:36 crc kubenswrapper[4805]: > Feb 16 20:59:36 crc kubenswrapper[4805]: I0216 20:59:36.654364 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mw567" Feb 16 20:59:36 crc kubenswrapper[4805]: I0216 20:59:36.654446 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mw567" Feb 16 20:59:36 crc kubenswrapper[4805]: I0216 20:59:36.722375 4805 generic.go:334] "Generic (PLEG): container finished" podID="cf88f505-3919-4584-a468-c19acc3a41c1" containerID="21e6f5b603aebe0879718ba69a41ad53165d58af70a5814dd804d725ed371137" exitCode=0 Feb 16 20:59:36 crc kubenswrapper[4805]: I0216 20:59:36.722536 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cf88f505-3919-4584-a468-c19acc3a41c1","Type":"ContainerDied","Data":"21e6f5b603aebe0879718ba69a41ad53165d58af70a5814dd804d725ed371137"} Feb 16 20:59:36 crc kubenswrapper[4805]: I0216 20:59:36.882106 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-7479h" podUID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerName="registry-server" probeResult="failure" output=< Feb 16 20:59:36 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 20:59:36 crc kubenswrapper[4805]: > Feb 16 20:59:37 crc kubenswrapper[4805]: I0216 20:59:37.049775 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:59:37 crc kubenswrapper[4805]: I0216 20:59:37.049954 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:59:37 crc kubenswrapper[4805]: I0216 20:59:37.695987 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mw567" podUID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" containerName="registry-server" probeResult="failure" output=< Feb 16 20:59:37 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 20:59:37 crc kubenswrapper[4805]: > Feb 16 20:59:37 crc kubenswrapper[4805]: I0216 20:59:37.994273 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.081926 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-878vf" podUID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerName="registry-server" probeResult="failure" output=< Feb 16 20:59:38 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 20:59:38 crc kubenswrapper[4805]: > Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.099473 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.099529 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.115547 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf88f505-3919-4584-a468-c19acc3a41c1-kubelet-dir\") pod \"cf88f505-3919-4584-a468-c19acc3a41c1\" (UID: \"cf88f505-3919-4584-a468-c19acc3a41c1\") " Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.115620 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf88f505-3919-4584-a468-c19acc3a41c1-kube-api-access\") pod \"cf88f505-3919-4584-a468-c19acc3a41c1\" (UID: \"cf88f505-3919-4584-a468-c19acc3a41c1\") " Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.115666 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf88f505-3919-4584-a468-c19acc3a41c1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf88f505-3919-4584-a468-c19acc3a41c1" (UID: "cf88f505-3919-4584-a468-c19acc3a41c1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.115923 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf88f505-3919-4584-a468-c19acc3a41c1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.122244 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf88f505-3919-4584-a468-c19acc3a41c1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf88f505-3919-4584-a468-c19acc3a41c1" (UID: "cf88f505-3919-4584-a468-c19acc3a41c1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.217272 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf88f505-3919-4584-a468-c19acc3a41c1-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.734381 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cf88f505-3919-4584-a468-c19acc3a41c1","Type":"ContainerDied","Data":"c1ad141be303ba0041d0fa21c614f9eced5df62fe1706ecd07dc9be286aeb7a6"} Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.734417 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1ad141be303ba0041d0fa21c614f9eced5df62fe1706ecd07dc9be286aeb7a6" Feb 16 20:59:38 crc kubenswrapper[4805]: I0216 20:59:38.734459 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.359979 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 20:59:41 crc kubenswrapper[4805]: E0216 20:59:41.360898 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf88f505-3919-4584-a468-c19acc3a41c1" containerName="pruner" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.360928 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf88f505-3919-4584-a468-c19acc3a41c1" containerName="pruner" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.361082 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf88f505-3919-4584-a468-c19acc3a41c1" containerName="pruner" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.361653 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.364401 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.364480 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.368671 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.556684 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-var-lock\") pod \"installer-9-crc\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.556783 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c83051b-f772-4ad8-8e02-8d51a3386b25-kube-api-access\") pod \"installer-9-crc\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.556837 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.657985 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.658037 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-var-lock\") pod \"installer-9-crc\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.658086 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c83051b-f772-4ad8-8e02-8d51a3386b25-kube-api-access\") pod \"installer-9-crc\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.658093 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.658244 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-var-lock\") pod \"installer-9-crc\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.683963 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c83051b-f772-4ad8-8e02-8d51a3386b25-kube-api-access\") pod \"installer-9-crc\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:41 crc kubenswrapper[4805]: I0216 20:59:41.977243 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 20:59:42 crc kubenswrapper[4805]: I0216 20:59:42.456239 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 20:59:42 crc kubenswrapper[4805]: I0216 20:59:42.757620 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5c83051b-f772-4ad8-8e02-8d51a3386b25","Type":"ContainerStarted","Data":"768dc565a32dd50c968c4f5a86e1a709e67eca7ebba9b0ed66abbedd5518da0b"} Feb 16 20:59:43 crc kubenswrapper[4805]: I0216 20:59:43.767086 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5c83051b-f772-4ad8-8e02-8d51a3386b25","Type":"ContainerStarted","Data":"e9be8888058702baef2fe6313925163e3b6517720ea66990bfb1fa98a74b5ca0"} Feb 16 20:59:43 crc kubenswrapper[4805]: I0216 20:59:43.803552 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.80351507 podStartE2EDuration="2.80351507s" podCreationTimestamp="2026-02-16 20:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:43.793680113 +0000 UTC m=+201.612363448" watchObservedRunningTime="2026-02-16 20:59:43.80351507 +0000 UTC m=+201.622198405" Feb 16 20:59:43 crc kubenswrapper[4805]: I0216 20:59:43.831310 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:59:43 crc kubenswrapper[4805]: I0216 20:59:43.831427 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:59:43 crc 
kubenswrapper[4805]: I0216 20:59:43.898635 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:59:44 crc kubenswrapper[4805]: I0216 20:59:44.092631 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:59:44 crc kubenswrapper[4805]: I0216 20:59:44.165941 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:59:44 crc kubenswrapper[4805]: I0216 20:59:44.839285 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:59:45 crc kubenswrapper[4805]: I0216 20:59:45.457567 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hrj84" Feb 16 20:59:45 crc kubenswrapper[4805]: I0216 20:59:45.512361 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hrj84" Feb 16 20:59:45 crc kubenswrapper[4805]: I0216 20:59:45.882997 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7479h" Feb 16 20:59:45 crc kubenswrapper[4805]: I0216 20:59:45.934146 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7479h" Feb 16 20:59:46 crc kubenswrapper[4805]: I0216 20:59:46.717512 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mw567" Feb 16 20:59:46 crc kubenswrapper[4805]: I0216 20:59:46.783029 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mw567" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.089993 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-8zllg"] Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.090362 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8zllg" podUID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerName="registry-server" containerID="cri-o://c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141" gracePeriod=2 Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.124651 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.183679 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.661660 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.805268 4805 generic.go:334] "Generic (PLEG): container finished" podID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerID="c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141" exitCode=0 Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.805370 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zllg" event={"ID":"c82acaa8-7c4e-40ea-985f-e5cc3faa0475","Type":"ContainerDied","Data":"c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141"} Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.805460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zllg" event={"ID":"c82acaa8-7c4e-40ea-985f-e5cc3faa0475","Type":"ContainerDied","Data":"46cc68863ad9c63d16ea82bf3fbc03c1ac6db94f10d0f17858a2ebb1a99b92b9"} Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.805495 4805 scope.go:117] 
"RemoveContainer" containerID="c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.805507 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8zllg" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.833705 4805 scope.go:117] "RemoveContainer" containerID="4cb7daf1682a37012409eba48d9df1883f6b0565381eef4e2586b1ac896c78b4" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.850282 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-catalog-content\") pod \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.850416 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr6gz\" (UniqueName: \"kubernetes.io/projected/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-kube-api-access-vr6gz\") pod \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.850582 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-utilities\") pod \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\" (UID: \"c82acaa8-7c4e-40ea-985f-e5cc3faa0475\") " Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.853151 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-utilities" (OuterVolumeSpecName: "utilities") pod "c82acaa8-7c4e-40ea-985f-e5cc3faa0475" (UID: "c82acaa8-7c4e-40ea-985f-e5cc3faa0475"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.859582 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-kube-api-access-vr6gz" (OuterVolumeSpecName: "kube-api-access-vr6gz") pod "c82acaa8-7c4e-40ea-985f-e5cc3faa0475" (UID: "c82acaa8-7c4e-40ea-985f-e5cc3faa0475"). InnerVolumeSpecName "kube-api-access-vr6gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.860337 4805 scope.go:117] "RemoveContainer" containerID="80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.923146 4805 scope.go:117] "RemoveContainer" containerID="c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141" Feb 16 20:59:47 crc kubenswrapper[4805]: E0216 20:59:47.924010 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141\": container with ID starting with c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141 not found: ID does not exist" containerID="c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.924081 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141"} err="failed to get container status \"c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141\": rpc error: code = NotFound desc = could not find container \"c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141\": container with ID starting with c56a11b800fbc8f5e4141518b8eceeb79045ff3dcc5a5138ec4ccee890e58141 not found: ID does not exist" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.924159 
4805 scope.go:117] "RemoveContainer" containerID="4cb7daf1682a37012409eba48d9df1883f6b0565381eef4e2586b1ac896c78b4" Feb 16 20:59:47 crc kubenswrapper[4805]: E0216 20:59:47.924912 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb7daf1682a37012409eba48d9df1883f6b0565381eef4e2586b1ac896c78b4\": container with ID starting with 4cb7daf1682a37012409eba48d9df1883f6b0565381eef4e2586b1ac896c78b4 not found: ID does not exist" containerID="4cb7daf1682a37012409eba48d9df1883f6b0565381eef4e2586b1ac896c78b4" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.924974 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb7daf1682a37012409eba48d9df1883f6b0565381eef4e2586b1ac896c78b4"} err="failed to get container status \"4cb7daf1682a37012409eba48d9df1883f6b0565381eef4e2586b1ac896c78b4\": rpc error: code = NotFound desc = could not find container \"4cb7daf1682a37012409eba48d9df1883f6b0565381eef4e2586b1ac896c78b4\": container with ID starting with 4cb7daf1682a37012409eba48d9df1883f6b0565381eef4e2586b1ac896c78b4 not found: ID does not exist" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.925015 4805 scope.go:117] "RemoveContainer" containerID="80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0" Feb 16 20:59:47 crc kubenswrapper[4805]: E0216 20:59:47.925563 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0\": container with ID starting with 80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0 not found: ID does not exist" containerID="80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.925604 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0"} err="failed to get container status \"80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0\": rpc error: code = NotFound desc = could not find container \"80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0\": container with ID starting with 80f25cdb04e38bdc41004cfc2a985131221f9c97898159730ff92e921b56aee0 not found: ID does not exist" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.939416 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c82acaa8-7c4e-40ea-985f-e5cc3faa0475" (UID: "c82acaa8-7c4e-40ea-985f-e5cc3faa0475"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.952293 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.952326 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:47 crc kubenswrapper[4805]: I0216 20:59:47.952344 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr6gz\" (UniqueName: \"kubernetes.io/projected/c82acaa8-7c4e-40ea-985f-e5cc3faa0475-kube-api-access-vr6gz\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:48 crc kubenswrapper[4805]: I0216 20:59:48.135462 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8zllg"] Feb 16 20:59:48 crc kubenswrapper[4805]: I0216 20:59:48.142057 4805 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/certified-operators-8zllg"] Feb 16 20:59:48 crc kubenswrapper[4805]: I0216 20:59:48.489185 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7479h"] Feb 16 20:59:48 crc kubenswrapper[4805]: I0216 20:59:48.489520 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7479h" podUID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerName="registry-server" containerID="cri-o://91d42afafbdfe8e62fcf18fddec1a1ac3d5ba6d49fdb929a0f0722994ae37c7e" gracePeriod=2 Feb 16 20:59:48 crc kubenswrapper[4805]: I0216 20:59:48.816634 4805 generic.go:334] "Generic (PLEG): container finished" podID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerID="91d42afafbdfe8e62fcf18fddec1a1ac3d5ba6d49fdb929a0f0722994ae37c7e" exitCode=0 Feb 16 20:59:48 crc kubenswrapper[4805]: I0216 20:59:48.816715 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7479h" event={"ID":"b5594412-8308-44f4-9f7e-15a2411d7f6a","Type":"ContainerDied","Data":"91d42afafbdfe8e62fcf18fddec1a1ac3d5ba6d49fdb929a0f0722994ae37c7e"} Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.039342 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7479h" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.170229 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8jhq\" (UniqueName: \"kubernetes.io/projected/b5594412-8308-44f4-9f7e-15a2411d7f6a-kube-api-access-t8jhq\") pod \"b5594412-8308-44f4-9f7e-15a2411d7f6a\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.170363 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-utilities\") pod \"b5594412-8308-44f4-9f7e-15a2411d7f6a\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.170405 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-catalog-content\") pod \"b5594412-8308-44f4-9f7e-15a2411d7f6a\" (UID: \"b5594412-8308-44f4-9f7e-15a2411d7f6a\") " Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.171450 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-utilities" (OuterVolumeSpecName: "utilities") pod "b5594412-8308-44f4-9f7e-15a2411d7f6a" (UID: "b5594412-8308-44f4-9f7e-15a2411d7f6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.176232 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5594412-8308-44f4-9f7e-15a2411d7f6a-kube-api-access-t8jhq" (OuterVolumeSpecName: "kube-api-access-t8jhq") pod "b5594412-8308-44f4-9f7e-15a2411d7f6a" (UID: "b5594412-8308-44f4-9f7e-15a2411d7f6a"). InnerVolumeSpecName "kube-api-access-t8jhq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.190865 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7"] Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.191290 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" podUID="14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a" containerName="controller-manager" containerID="cri-o://9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966" gracePeriod=30 Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.201597 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5594412-8308-44f4-9f7e-15a2411d7f6a" (UID: "b5594412-8308-44f4-9f7e-15a2411d7f6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.215014 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt"] Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.215479 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" podUID="068b9847-19dc-4a62-849f-161f95935fe4" containerName="route-controller-manager" containerID="cri-o://7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb" gracePeriod=30 Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.271360 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.271653 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5594412-8308-44f4-9f7e-15a2411d7f6a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.271798 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8jhq\" (UniqueName: \"kubernetes.io/projected/b5594412-8308-44f4-9f7e-15a2411d7f6a-kube-api-access-t8jhq\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.274346 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" podUID="97145a00-5917-496b-8eaa-48da22c29d3d" containerName="oauth-openshift" containerID="cri-o://e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9" gracePeriod=15 Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.487544 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-fnfbv"] Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.487882 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fnfbv" podUID="db65d55c-51e3-4303-b819-3f92da3814d9" containerName="registry-server" containerID="cri-o://2a6988a09cb181e0e0b291344610eb3ec2934e54ac8a6c471b6843589ecbf18d" gracePeriod=2 Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.606994 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" path="/var/lib/kubelet/pods/c82acaa8-7c4e-40ea-985f-e5cc3faa0475/volumes" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.783915 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.797770 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.823411 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.828320 4805 generic.go:334] "Generic (PLEG): container finished" podID="068b9847-19dc-4a62-849f-161f95935fe4" containerID="7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb" exitCode=0 Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.828394 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" event={"ID":"068b9847-19dc-4a62-849f-161f95935fe4","Type":"ContainerDied","Data":"7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb"} Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.828402 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.828463 4805 scope.go:117] "RemoveContainer" containerID="7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.828443 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt" event={"ID":"068b9847-19dc-4a62-849f-161f95935fe4","Type":"ContainerDied","Data":"8058b2dbba3bcc7a93124cca6452eaebd2e7787f452e85b5275647e2f313a933"} Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.831034 4805 generic.go:334] "Generic (PLEG): container finished" podID="db65d55c-51e3-4303-b819-3f92da3814d9" containerID="2a6988a09cb181e0e0b291344610eb3ec2934e54ac8a6c471b6843589ecbf18d" exitCode=0 Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.831086 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnfbv" 
event={"ID":"db65d55c-51e3-4303-b819-3f92da3814d9","Type":"ContainerDied","Data":"2a6988a09cb181e0e0b291344610eb3ec2934e54ac8a6c471b6843589ecbf18d"} Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.832203 4805 generic.go:334] "Generic (PLEG): container finished" podID="14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a" containerID="9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966" exitCode=0 Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.832243 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.832250 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" event={"ID":"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a","Type":"ContainerDied","Data":"9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966"} Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.832302 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7" event={"ID":"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a","Type":"ContainerDied","Data":"be01a69b10b4841eee3a35772c4e3931627e590d83bd8bbaf047672a4c8e5eba"} Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.835353 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7479h" event={"ID":"b5594412-8308-44f4-9f7e-15a2411d7f6a","Type":"ContainerDied","Data":"fe57cbdc3191c925e727ecfd9d041d093406d905f7098d6375a68ad2925ad524"} Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.835397 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7479h"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.837332 4805 generic.go:334] "Generic (PLEG): container finished" podID="97145a00-5917-496b-8eaa-48da22c29d3d" containerID="e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9" exitCode=0
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.837364 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" event={"ID":"97145a00-5917-496b-8eaa-48da22c29d3d","Type":"ContainerDied","Data":"e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9"}
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.837388 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v" event={"ID":"97145a00-5917-496b-8eaa-48da22c29d3d","Type":"ContainerDied","Data":"31e5d2fbbb551aa218d6d4e226fe26fc9ddde4586084b9b6b7c3e8c1a7199fab"}
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.837432 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wwm8v"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.860625 4805 scope.go:117] "RemoveContainer" containerID="7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb"
Feb 16 20:59:49 crc kubenswrapper[4805]: E0216 20:59:49.861053 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb\": container with ID starting with 7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb not found: ID does not exist" containerID="7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.861086 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb"} err="failed to get container status \"7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb\": rpc error: code = NotFound desc = could not find container \"7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb\": container with ID starting with 7dbadf29c57131379e23fca6499d66a449be6cf893894dbe29d0550226d363bb not found: ID does not exist"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.861106 4805 scope.go:117] "RemoveContainer" containerID="9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.870984 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7479h"]
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.874845 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7479h"]
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.874926 4805 scope.go:117] "RemoveContainer" containerID="9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966"
Feb 16 20:59:49 crc kubenswrapper[4805]: E0216 20:59:49.875439 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966\": container with ID starting with 9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966 not found: ID does not exist" containerID="9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.875478 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966"} err="failed to get container status \"9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966\": rpc error: code = NotFound desc = could not find container \"9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966\": container with ID starting with 9bb8e613a1ebb90707e16f823d01257655f5ee09d41d641665eb3acd88e9d966 not found: ID does not exist"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.875508 4805 scope.go:117] "RemoveContainer" containerID="91d42afafbdfe8e62fcf18fddec1a1ac3d5ba6d49fdb929a0f0722994ae37c7e"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.880940 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-session\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.880992 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-audit-policies\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881025 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97145a00-5917-496b-8eaa-48da22c29d3d-audit-dir\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881045 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-router-certs\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881082 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-service-ca\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881104 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-login\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881151 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-idp-0-file-data\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881172 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-trusted-ca-bundle\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881207 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-error\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881232 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-cliconfig\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881254 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dlsr\" (UniqueName: \"kubernetes.io/projected/97145a00-5917-496b-8eaa-48da22c29d3d-kube-api-access-9dlsr\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881276 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-provider-selection\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881302 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-ocp-branding-template\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.881322 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-serving-cert\") pod \"97145a00-5917-496b-8eaa-48da22c29d3d\" (UID: \"97145a00-5917-496b-8eaa-48da22c29d3d\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.882662 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.883200 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.886363 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.886610 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.886930 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97145a00-5917-496b-8eaa-48da22c29d3d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.887091 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.887337 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.887661 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.890862 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.898084 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.898502 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.899427 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.900109 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97145a00-5917-496b-8eaa-48da22c29d3d-kube-api-access-9dlsr" (OuterVolumeSpecName: "kube-api-access-9dlsr") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "kube-api-access-9dlsr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.900156 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "97145a00-5917-496b-8eaa-48da22c29d3d" (UID: "97145a00-5917-496b-8eaa-48da22c29d3d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.905239 4805 scope.go:117] "RemoveContainer" containerID="fffcf0ea05883de8e9fc2590a296e807ee3f6f07beed74b4755707cfe0356c81"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.917509 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fnfbv"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.920759 4805 scope.go:117] "RemoveContainer" containerID="7ea736aa870a4b4ec2ddaf1f40b370e6fbd12643039e373c19284fe835dbd94f"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.941649 4805 scope.go:117] "RemoveContainer" containerID="e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.956397 4805 scope.go:117] "RemoveContainer" containerID="e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9"
Feb 16 20:59:49 crc kubenswrapper[4805]: E0216 20:59:49.956896 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9\": container with ID starting with e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9 not found: ID does not exist" containerID="e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.956933 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9"} err="failed to get container status \"e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9\": rpc error: code = NotFound desc = could not find container \"e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9\": container with ID starting with e75fb7620f417e958bd1b20331dd79368a33a60f0ce7cb10e544c506f045bed9 not found: ID does not exist"
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.982607 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-serving-cert\") pod \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.982774 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nffnc\" (UniqueName: \"kubernetes.io/projected/068b9847-19dc-4a62-849f-161f95935fe4-kube-api-access-nffnc\") pod \"068b9847-19dc-4a62-849f-161f95935fe4\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.982867 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-client-ca\") pod \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.982952 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-config\") pod \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983054 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-client-ca\") pod \"068b9847-19dc-4a62-849f-161f95935fe4\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983141 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt2z2\" (UniqueName: \"kubernetes.io/projected/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-kube-api-access-zt2z2\") pod \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983177 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/068b9847-19dc-4a62-849f-161f95935fe4-serving-cert\") pod \"068b9847-19dc-4a62-849f-161f95935fe4\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983252 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-config\") pod \"068b9847-19dc-4a62-849f-161f95935fe4\" (UID: \"068b9847-19dc-4a62-849f-161f95935fe4\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983317 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-proxy-ca-bundles\") pod \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\" (UID: \"14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a\") "
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983642 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983658 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983675 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983689 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983758 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dlsr\" (UniqueName: \"kubernetes.io/projected/97145a00-5917-496b-8eaa-48da22c29d3d-kube-api-access-9dlsr\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983778 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983796 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983846 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983861 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983851 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-config" (OuterVolumeSpecName: "config") pod "14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a" (UID: "14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983875 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983923 4805 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97145a00-5917-496b-8eaa-48da22c29d3d-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983937 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983950 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.983961 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97145a00-5917-496b-8eaa-48da22c29d3d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.984259 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-client-ca" (OuterVolumeSpecName: "client-ca") pod "068b9847-19dc-4a62-849f-161f95935fe4" (UID: "068b9847-19dc-4a62-849f-161f95935fe4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.984293 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-config" (OuterVolumeSpecName: "config") pod "068b9847-19dc-4a62-849f-161f95935fe4" (UID: "068b9847-19dc-4a62-849f-161f95935fe4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.985059 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-client-ca" (OuterVolumeSpecName: "client-ca") pod "14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a" (UID: "14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.985708 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a" (UID: "14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.986198 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068b9847-19dc-4a62-849f-161f95935fe4-kube-api-access-nffnc" (OuterVolumeSpecName: "kube-api-access-nffnc") pod "068b9847-19dc-4a62-849f-161f95935fe4" (UID: "068b9847-19dc-4a62-849f-161f95935fe4"). InnerVolumeSpecName "kube-api-access-nffnc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.986270 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-kube-api-access-zt2z2" (OuterVolumeSpecName: "kube-api-access-zt2z2") pod "14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a" (UID: "14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a"). InnerVolumeSpecName "kube-api-access-zt2z2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.986625 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068b9847-19dc-4a62-849f-161f95935fe4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "068b9847-19dc-4a62-849f-161f95935fe4" (UID: "068b9847-19dc-4a62-849f-161f95935fe4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:59:49 crc kubenswrapper[4805]: I0216 20:59:49.988209 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a" (UID: "14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.084915 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-utilities\") pod \"db65d55c-51e3-4303-b819-3f92da3814d9\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") "
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.084958 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9gd7\" (UniqueName: \"kubernetes.io/projected/db65d55c-51e3-4303-b819-3f92da3814d9-kube-api-access-k9gd7\") pod \"db65d55c-51e3-4303-b819-3f92da3814d9\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") "
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.085057 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-catalog-content\") pod \"db65d55c-51e3-4303-b819-3f92da3814d9\" (UID: \"db65d55c-51e3-4303-b819-3f92da3814d9\") "
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.085263 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.085275 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.085285 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nffnc\" (UniqueName: \"kubernetes.io/projected/068b9847-19dc-4a62-849f-161f95935fe4-kube-api-access-nffnc\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.085297 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.085305 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.085313 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.085321 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt2z2\" (UniqueName: \"kubernetes.io/projected/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a-kube-api-access-zt2z2\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.085330 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/068b9847-19dc-4a62-849f-161f95935fe4-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.085340 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068b9847-19dc-4a62-849f-161f95935fe4-config\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.087427 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-utilities" (OuterVolumeSpecName: "utilities") pod "db65d55c-51e3-4303-b819-3f92da3814d9" (UID: "db65d55c-51e3-4303-b819-3f92da3814d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.093381 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db65d55c-51e3-4303-b819-3f92da3814d9-kube-api-access-k9gd7" (OuterVolumeSpecName: "kube-api-access-k9gd7") pod "db65d55c-51e3-4303-b819-3f92da3814d9" (UID: "db65d55c-51e3-4303-b819-3f92da3814d9"). InnerVolumeSpecName "kube-api-access-k9gd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.182161 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7"]
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.187888 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db65d55c-51e3-4303-b819-3f92da3814d9" (UID: "db65d55c-51e3-4303-b819-3f92da3814d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.188080 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.188132 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9gd7\" (UniqueName: \"kubernetes.io/projected/db65d55c-51e3-4303-b819-3f92da3814d9-kube-api-access-k9gd7\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.188157 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db65d55c-51e3-4303-b819-3f92da3814d9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.189285 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bd9b9754c-vzdw7"]
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.217140 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wwm8v"]
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.226107 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wwm8v"]
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.235260 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt"]
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.241848 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-769ff99c5c-2cdqt"]
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693053 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c4769f759-nz8ts"]
Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693355 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97145a00-5917-496b-8eaa-48da22c29d3d" containerName="oauth-openshift"
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693377 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="97145a00-5917-496b-8eaa-48da22c29d3d" containerName="oauth-openshift"
Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693397 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db65d55c-51e3-4303-b819-3f92da3814d9" containerName="registry-server"
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693409 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="db65d55c-51e3-4303-b819-3f92da3814d9" containerName="registry-server"
Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693428 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a" containerName="controller-manager"
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693442 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a" containerName="controller-manager"
Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693460 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerName="extract-utilities"
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693471 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerName="extract-utilities"
Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693495 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db65d55c-51e3-4303-b819-3f92da3814d9" containerName="extract-utilities"
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693508 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="db65d55c-51e3-4303-b819-3f92da3814d9" containerName="extract-utilities"
Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693528 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerName="extract-content"
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693539 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerName="extract-content"
Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693552 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068b9847-19dc-4a62-849f-161f95935fe4" containerName="route-controller-manager"
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693564 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="068b9847-19dc-4a62-849f-161f95935fe4" containerName="route-controller-manager"
Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693582 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerName="extract-utilities"
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693594 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerName="extract-utilities"
Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693610 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerName="extract-content"
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693622 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerName="extract-content"
Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693641 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerName="registry-server"
Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693652 4805 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerName="registry-server" Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693667 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerName="registry-server" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693678 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerName="registry-server" Feb 16 20:59:50 crc kubenswrapper[4805]: E0216 20:59:50.693690 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db65d55c-51e3-4303-b819-3f92da3814d9" containerName="extract-content" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693701 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="db65d55c-51e3-4303-b819-3f92da3814d9" containerName="extract-content" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693903 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82acaa8-7c4e-40ea-985f-e5cc3faa0475" containerName="registry-server" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693924 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a" containerName="controller-manager" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693939 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5594412-8308-44f4-9f7e-15a2411d7f6a" containerName="registry-server" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693950 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="db65d55c-51e3-4303-b819-3f92da3814d9" containerName="registry-server" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693964 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="068b9847-19dc-4a62-849f-161f95935fe4" containerName="route-controller-manager" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.693983 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="97145a00-5917-496b-8eaa-48da22c29d3d" containerName="oauth-openshift" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.694501 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.696448 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.697809 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.701674 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.701804 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.701702 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.710288 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.717021 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk"] Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.720538 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.725956 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.726406 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.726596 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.726762 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.727044 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.727190 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.731311 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.734087 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c4769f759-nz8ts"] Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.742029 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk"] Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.805628 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-client-ca\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.805675 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-proxy-ca-bundles\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.805697 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-serving-cert\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.805748 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-config\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.806637 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7nqf\" (UniqueName: \"kubernetes.io/projected/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-kube-api-access-r7nqf\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " 
pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.843446 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnfbv" event={"ID":"db65d55c-51e3-4303-b819-3f92da3814d9","Type":"ContainerDied","Data":"d7589cbc694cb37b31619a0697b72a7e3b61b19cdf4372be03ba07b98c064cbc"} Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.843475 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fnfbv" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.843503 4805 scope.go:117] "RemoveContainer" containerID="2a6988a09cb181e0e0b291344610eb3ec2934e54ac8a6c471b6843589ecbf18d" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.861910 4805 scope.go:117] "RemoveContainer" containerID="653ac299c2bf3639c6c9f34dd612afee851bf3d7a2d9e5dac4360b977aacb3fa" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.870996 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fnfbv"] Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.873870 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fnfbv"] Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.885104 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-878vf"] Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.885461 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-878vf" podUID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerName="registry-server" containerID="cri-o://08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55" gracePeriod=2 Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.894697 4805 scope.go:117] "RemoveContainer" 
containerID="b83991a1d331944214b2309721a5910725e369af1f19c5397bc7a92b610e777b" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.908291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-serving-cert\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.908340 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btfxv\" (UniqueName: \"kubernetes.io/projected/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-kube-api-access-btfxv\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.908380 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-config\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.908418 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7nqf\" (UniqueName: \"kubernetes.io/projected/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-kube-api-access-r7nqf\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.908453 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-config\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.908472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-client-ca\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.908490 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-proxy-ca-bundles\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.908510 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-client-ca\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.908527 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-serving-cert\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 
20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.910316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-client-ca\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.911188 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-config\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.911476 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-proxy-ca-bundles\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.912561 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-serving-cert\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:50 crc kubenswrapper[4805]: I0216 20:59:50.943884 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7nqf\" (UniqueName: \"kubernetes.io/projected/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-kube-api-access-r7nqf\") pod \"controller-manager-7c4769f759-nz8ts\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " 
pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.009918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-config\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.009973 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-client-ca\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.010005 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-serving-cert\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.010028 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btfxv\" (UniqueName: \"kubernetes.io/projected/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-kube-api-access-btfxv\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.011069 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-client-ca\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.011206 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-config\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.014910 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-serving-cert\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.023171 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.029225 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btfxv\" (UniqueName: \"kubernetes.io/projected/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-kube-api-access-btfxv\") pod \"route-controller-manager-84b8b8c6fc-85dgk\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.046483 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.239430 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.414603 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-utilities\") pod \"8509d47c-b1fc-473a-9252-6c50c7a630b7\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.415142 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvh2q\" (UniqueName: \"kubernetes.io/projected/8509d47c-b1fc-473a-9252-6c50c7a630b7-kube-api-access-dvh2q\") pod \"8509d47c-b1fc-473a-9252-6c50c7a630b7\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.415234 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-catalog-content\") pod \"8509d47c-b1fc-473a-9252-6c50c7a630b7\" (UID: \"8509d47c-b1fc-473a-9252-6c50c7a630b7\") " Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.415686 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-utilities" (OuterVolumeSpecName: "utilities") pod "8509d47c-b1fc-473a-9252-6c50c7a630b7" (UID: "8509d47c-b1fc-473a-9252-6c50c7a630b7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.422957 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8509d47c-b1fc-473a-9252-6c50c7a630b7-kube-api-access-dvh2q" (OuterVolumeSpecName: "kube-api-access-dvh2q") pod "8509d47c-b1fc-473a-9252-6c50c7a630b7" (UID: "8509d47c-b1fc-473a-9252-6c50c7a630b7"). InnerVolumeSpecName "kube-api-access-dvh2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.447324 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c4769f759-nz8ts"] Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.516448 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.516484 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvh2q\" (UniqueName: \"kubernetes.io/projected/8509d47c-b1fc-473a-9252-6c50c7a630b7-kube-api-access-dvh2q\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.537335 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk"] Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.548805 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8509d47c-b1fc-473a-9252-6c50c7a630b7" (UID: "8509d47c-b1fc-473a-9252-6c50c7a630b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:59:51 crc kubenswrapper[4805]: W0216 20:59:51.555027 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fe50339_ce37_4e95_a7d3_cbb42f07e5b2.slice/crio-7a4a8e9b8f4f108913ec2d89bd490338dfb3435a7172bcd96a20ecd33c261cf2 WatchSource:0}: Error finding container 7a4a8e9b8f4f108913ec2d89bd490338dfb3435a7172bcd96a20ecd33c261cf2: Status 404 returned error can't find the container with id 7a4a8e9b8f4f108913ec2d89bd490338dfb3435a7172bcd96a20ecd33c261cf2 Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.608406 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068b9847-19dc-4a62-849f-161f95935fe4" path="/var/lib/kubelet/pods/068b9847-19dc-4a62-849f-161f95935fe4/volumes" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.609976 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a" path="/var/lib/kubelet/pods/14b84b5f-73d8-4c5b-9ba5-9f8b6db3134a/volumes" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.610940 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97145a00-5917-496b-8eaa-48da22c29d3d" path="/var/lib/kubelet/pods/97145a00-5917-496b-8eaa-48da22c29d3d/volumes" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.617789 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5594412-8308-44f4-9f7e-15a2411d7f6a" path="/var/lib/kubelet/pods/b5594412-8308-44f4-9f7e-15a2411d7f6a/volumes" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.617925 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8509d47c-b1fc-473a-9252-6c50c7a630b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.618750 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="db65d55c-51e3-4303-b819-3f92da3814d9" path="/var/lib/kubelet/pods/db65d55c-51e3-4303-b819-3f92da3814d9/volumes" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.857936 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" event={"ID":"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2","Type":"ContainerStarted","Data":"2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0"} Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.858010 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" event={"ID":"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2","Type":"ContainerStarted","Data":"7a4a8e9b8f4f108913ec2d89bd490338dfb3435a7172bcd96a20ecd33c261cf2"} Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.858245 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.862702 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" event={"ID":"49eb0e50-c6ad-4813-84c1-4ae52f3255c8","Type":"ContainerStarted","Data":"8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad"} Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.862766 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" event={"ID":"49eb0e50-c6ad-4813-84c1-4ae52f3255c8","Type":"ContainerStarted","Data":"a986bf941226ecaa9079e67369c5b0566a74260380cbc478eb68a7dbbcc2ed5f"} Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.863172 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.865800 4805 
generic.go:334] "Generic (PLEG): container finished" podID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerID="08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55" exitCode=0 Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.865857 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-878vf" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.865881 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-878vf" event={"ID":"8509d47c-b1fc-473a-9252-6c50c7a630b7","Type":"ContainerDied","Data":"08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55"} Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.865932 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-878vf" event={"ID":"8509d47c-b1fc-473a-9252-6c50c7a630b7","Type":"ContainerDied","Data":"bf17409b8513c8a106b6cec9ec5a098d2bbce9f606bee4c40cd32b1af53bd83c"} Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.865963 4805 scope.go:117] "RemoveContainer" containerID="08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.880122 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.893200 4805 scope.go:117] "RemoveContainer" containerID="be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.907914 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" podStartSLOduration=2.907897064 podStartE2EDuration="2.907897064s" podCreationTimestamp="2026-02-16 20:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:51.906485706 +0000 UTC m=+209.725169001" watchObservedRunningTime="2026-02-16 20:59:51.907897064 +0000 UTC m=+209.726580359" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.909795 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" podStartSLOduration=2.909788116 podStartE2EDuration="2.909788116s" podCreationTimestamp="2026-02-16 20:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:51.891442237 +0000 UTC m=+209.710125522" watchObservedRunningTime="2026-02-16 20:59:51.909788116 +0000 UTC m=+209.728471411" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.915440 4805 scope.go:117] "RemoveContainer" containerID="10270f2c81ebf5d5575459c001b9460924d5900f99afe55759517a2e96506627" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.932403 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-878vf"] Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.936960 4805 scope.go:117] "RemoveContainer" containerID="08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55" Feb 16 20:59:51 crc kubenswrapper[4805]: E0216 20:59:51.937462 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55\": container with ID starting with 08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55 not found: ID does not exist" containerID="08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.937507 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55"} err="failed to get container status \"08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55\": rpc error: code = NotFound desc = could not find container \"08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55\": container with ID starting with 08456eb624a568f1eac38d8e89a70ac6bd75b7e87d4d2f54c61810d68cbd6e55 not found: ID does not exist" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.937538 4805 scope.go:117] "RemoveContainer" containerID="be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca" Feb 16 20:59:51 crc kubenswrapper[4805]: E0216 20:59:51.937975 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca\": container with ID starting with be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca not found: ID does not exist" containerID="be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.938013 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca"} err="failed to get container status \"be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca\": rpc error: code = NotFound desc = could not find container \"be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca\": container with ID starting with be3eeb356bba6f518328a4cc167c8143e109fa190002f9023501c046f678dcca not found: ID does not exist" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.938041 4805 scope.go:117] "RemoveContainer" containerID="10270f2c81ebf5d5575459c001b9460924d5900f99afe55759517a2e96506627" Feb 16 20:59:51 crc kubenswrapper[4805]: E0216 20:59:51.938345 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"10270f2c81ebf5d5575459c001b9460924d5900f99afe55759517a2e96506627\": container with ID starting with 10270f2c81ebf5d5575459c001b9460924d5900f99afe55759517a2e96506627 not found: ID does not exist" containerID="10270f2c81ebf5d5575459c001b9460924d5900f99afe55759517a2e96506627" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.938365 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10270f2c81ebf5d5575459c001b9460924d5900f99afe55759517a2e96506627"} err="failed to get container status \"10270f2c81ebf5d5575459c001b9460924d5900f99afe55759517a2e96506627\": rpc error: code = NotFound desc = could not find container \"10270f2c81ebf5d5575459c001b9460924d5900f99afe55759517a2e96506627\": container with ID starting with 10270f2c81ebf5d5575459c001b9460924d5900f99afe55759517a2e96506627 not found: ID does not exist" Feb 16 20:59:51 crc kubenswrapper[4805]: I0216 20:59:51.940141 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-878vf"] Feb 16 20:59:52 crc kubenswrapper[4805]: I0216 20:59:52.042563 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.611880 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8509d47c-b1fc-473a-9252-6c50c7a630b7" path="/var/lib/kubelet/pods/8509d47c-b1fc-473a-9252-6c50c7a630b7/volumes" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.692490 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-68cb54d767-7jq5z"] Feb 16 20:59:53 crc kubenswrapper[4805]: E0216 20:59:53.692814 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerName="registry-server" Feb 16 20:59:53 crc 
kubenswrapper[4805]: I0216 20:59:53.692845 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerName="registry-server" Feb 16 20:59:53 crc kubenswrapper[4805]: E0216 20:59:53.692874 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerName="extract-content" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.692887 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerName="extract-content" Feb 16 20:59:53 crc kubenswrapper[4805]: E0216 20:59:53.692933 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerName="extract-utilities" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.692946 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerName="extract-utilities" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.693098 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8509d47c-b1fc-473a-9252-6c50c7a630b7" containerName="registry-server" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.693676 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.696200 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.698196 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.699584 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.699820 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.700010 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.702044 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.702303 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.702487 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.703465 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.703618 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 20:59:53 crc 
kubenswrapper[4805]: I0216 20:59:53.705706 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.705971 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.707287 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.707902 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68cb54d767-7jq5z"] Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.713382 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.727688 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.856979 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-session\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857058 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: 
\"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857092 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857119 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857147 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-audit-policies\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857290 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857338 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k5r4\" (UniqueName: \"kubernetes.io/projected/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-kube-api-access-7k5r4\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857386 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-template-error\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857409 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857431 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857477 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-audit-dir\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857544 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.857564 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-template-login\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.958637 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: 
\"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.958749 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k5r4\" (UniqueName: \"kubernetes.io/projected/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-kube-api-access-7k5r4\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.958849 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-template-error\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.958900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.958951 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.959061 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-audit-dir\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.959115 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.959210 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.959301 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-template-login\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.959318 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-audit-dir\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " 
pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.959390 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-session\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.959523 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.959642 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.959713 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.960014 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-audit-policies\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.960495 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.960491 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.964995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.964220 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-audit-policies\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" 
Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.965289 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.965598 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.966143 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.967229 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-session\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.967310 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-template-login\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.967612 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.967874 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.972010 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-v4-0-config-user-template-error\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:53 crc kubenswrapper[4805]: I0216 20:59:53.977101 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k5r4\" (UniqueName: \"kubernetes.io/projected/fa45d9be-166e-450a-9f97-4c1ca6e9a1a8-kube-api-access-7k5r4\") pod \"oauth-openshift-68cb54d767-7jq5z\" (UID: \"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8\") " pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:54 crc 
kubenswrapper[4805]: I0216 20:59:54.020231 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:54 crc kubenswrapper[4805]: I0216 20:59:54.485519 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68cb54d767-7jq5z"] Feb 16 20:59:54 crc kubenswrapper[4805]: W0216 20:59:54.492140 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa45d9be_166e_450a_9f97_4c1ca6e9a1a8.slice/crio-728b633beec5cb862202c64729d61fcbb875227f4950f2cd338ee35876965430 WatchSource:0}: Error finding container 728b633beec5cb862202c64729d61fcbb875227f4950f2cd338ee35876965430: Status 404 returned error can't find the container with id 728b633beec5cb862202c64729d61fcbb875227f4950f2cd338ee35876965430 Feb 16 20:59:54 crc kubenswrapper[4805]: I0216 20:59:54.904047 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" event={"ID":"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8","Type":"ContainerStarted","Data":"bd77cf50074ee6ad3c63399d19994a41b76f165861c56ac0437e5ec58c7db6db"} Feb 16 20:59:54 crc kubenswrapper[4805]: I0216 20:59:54.904090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" event={"ID":"fa45d9be-166e-450a-9f97-4c1ca6e9a1a8","Type":"ContainerStarted","Data":"728b633beec5cb862202c64729d61fcbb875227f4950f2cd338ee35876965430"} Feb 16 20:59:54 crc kubenswrapper[4805]: I0216 20:59:54.904658 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 20:59:54 crc kubenswrapper[4805]: I0216 20:59:54.923897 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" 
podStartSLOduration=30.923876008 podStartE2EDuration="30.923876008s" podCreationTimestamp="2026-02-16 20:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:59:54.920797784 +0000 UTC m=+212.739481079" watchObservedRunningTime="2026-02-16 20:59:54.923876008 +0000 UTC m=+212.742559303" Feb 16 20:59:55 crc kubenswrapper[4805]: I0216 20:59:55.097853 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-68cb54d767-7jq5z" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.158940 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92"] Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.160819 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.164850 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.165273 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.170305 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92"] Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.347974 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bce801e4-7f25-47d8-8860-e939a652ed28-config-volume\") pod \"collect-profiles-29521260-4cf92\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.348071 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdf8w\" (UniqueName: \"kubernetes.io/projected/bce801e4-7f25-47d8-8860-e939a652ed28-kube-api-access-jdf8w\") pod \"collect-profiles-29521260-4cf92\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.348115 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bce801e4-7f25-47d8-8860-e939a652ed28-secret-volume\") pod \"collect-profiles-29521260-4cf92\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.449114 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bce801e4-7f25-47d8-8860-e939a652ed28-secret-volume\") pod \"collect-profiles-29521260-4cf92\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.449204 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bce801e4-7f25-47d8-8860-e939a652ed28-config-volume\") pod \"collect-profiles-29521260-4cf92\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.449261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdf8w\" (UniqueName: 
\"kubernetes.io/projected/bce801e4-7f25-47d8-8860-e939a652ed28-kube-api-access-jdf8w\") pod \"collect-profiles-29521260-4cf92\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.450229 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bce801e4-7f25-47d8-8860-e939a652ed28-config-volume\") pod \"collect-profiles-29521260-4cf92\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.455356 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bce801e4-7f25-47d8-8860-e939a652ed28-secret-volume\") pod \"collect-profiles-29521260-4cf92\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.468821 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdf8w\" (UniqueName: \"kubernetes.io/projected/bce801e4-7f25-47d8-8860-e939a652ed28-kube-api-access-jdf8w\") pod \"collect-profiles-29521260-4cf92\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:00 crc kubenswrapper[4805]: I0216 21:00:00.536495 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:01 crc kubenswrapper[4805]: I0216 21:00:01.029456 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92"] Feb 16 21:00:01 crc kubenswrapper[4805]: I0216 21:00:01.968358 4805 generic.go:334] "Generic (PLEG): container finished" podID="bce801e4-7f25-47d8-8860-e939a652ed28" containerID="3f3f32f577e09b73acc30dcc1a6b8beb3b86c342fafcdf32bbfd6297b2af860d" exitCode=0 Feb 16 21:00:01 crc kubenswrapper[4805]: I0216 21:00:01.968481 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" event={"ID":"bce801e4-7f25-47d8-8860-e939a652ed28","Type":"ContainerDied","Data":"3f3f32f577e09b73acc30dcc1a6b8beb3b86c342fafcdf32bbfd6297b2af860d"} Feb 16 21:00:01 crc kubenswrapper[4805]: I0216 21:00:01.968988 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" event={"ID":"bce801e4-7f25-47d8-8860-e939a652ed28","Type":"ContainerStarted","Data":"21c18b35d9e696ef1675d0e192d18ebcd7300d2a584e9b1ad84f55aaeff92573"} Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.347376 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.400201 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bce801e4-7f25-47d8-8860-e939a652ed28-secret-volume\") pod \"bce801e4-7f25-47d8-8860-e939a652ed28\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.400282 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdf8w\" (UniqueName: \"kubernetes.io/projected/bce801e4-7f25-47d8-8860-e939a652ed28-kube-api-access-jdf8w\") pod \"bce801e4-7f25-47d8-8860-e939a652ed28\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.400370 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bce801e4-7f25-47d8-8860-e939a652ed28-config-volume\") pod \"bce801e4-7f25-47d8-8860-e939a652ed28\" (UID: \"bce801e4-7f25-47d8-8860-e939a652ed28\") " Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.401114 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bce801e4-7f25-47d8-8860-e939a652ed28-config-volume" (OuterVolumeSpecName: "config-volume") pod "bce801e4-7f25-47d8-8860-e939a652ed28" (UID: "bce801e4-7f25-47d8-8860-e939a652ed28"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.405641 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce801e4-7f25-47d8-8860-e939a652ed28-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bce801e4-7f25-47d8-8860-e939a652ed28" (UID: "bce801e4-7f25-47d8-8860-e939a652ed28"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.405874 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce801e4-7f25-47d8-8860-e939a652ed28-kube-api-access-jdf8w" (OuterVolumeSpecName: "kube-api-access-jdf8w") pod "bce801e4-7f25-47d8-8860-e939a652ed28" (UID: "bce801e4-7f25-47d8-8860-e939a652ed28"). InnerVolumeSpecName "kube-api-access-jdf8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.501203 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bce801e4-7f25-47d8-8860-e939a652ed28-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.501605 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bce801e4-7f25-47d8-8860-e939a652ed28-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.501617 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdf8w\" (UniqueName: \"kubernetes.io/projected/bce801e4-7f25-47d8-8860-e939a652ed28-kube-api-access-jdf8w\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.980646 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" event={"ID":"bce801e4-7f25-47d8-8860-e939a652ed28","Type":"ContainerDied","Data":"21c18b35d9e696ef1675d0e192d18ebcd7300d2a584e9b1ad84f55aaeff92573"} Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.980688 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21c18b35d9e696ef1675d0e192d18ebcd7300d2a584e9b1ad84f55aaeff92573" Feb 16 21:00:03 crc kubenswrapper[4805]: I0216 21:00:03.980700 4805 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92" Feb 16 21:00:08 crc kubenswrapper[4805]: I0216 21:00:08.100501 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:00:08 crc kubenswrapper[4805]: I0216 21:00:08.101012 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:00:08 crc kubenswrapper[4805]: I0216 21:00:08.101060 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:00:08 crc kubenswrapper[4805]: I0216 21:00:08.101596 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:00:08 crc kubenswrapper[4805]: I0216 21:00:08.101646 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794" gracePeriod=600 Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.014108 4805 generic.go:334] "Generic (PLEG): container 
finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794" exitCode=0 Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.014203 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794"} Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.014786 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"5d5aa7da8c088ddcac44286336170e6647dae110a6c4f871ef29f7ab0795c9ec"} Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.177990 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c4769f759-nz8ts"] Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.179100 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" podUID="49eb0e50-c6ad-4813-84c1-4ae52f3255c8" containerName="controller-manager" containerID="cri-o://8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad" gracePeriod=30 Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.274262 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk"] Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.274456 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" podUID="7fe50339-ce37-4e95-a7d3-cbb42f07e5b2" containerName="route-controller-manager" containerID="cri-o://2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0" 
gracePeriod=30 Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.770889 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.795768 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-config\") pod \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.795821 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btfxv\" (UniqueName: \"kubernetes.io/projected/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-kube-api-access-btfxv\") pod \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.795853 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-client-ca\") pod \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.795875 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-serving-cert\") pod \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\" (UID: \"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2\") " Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.796682 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-client-ca" (OuterVolumeSpecName: "client-ca") pod "7fe50339-ce37-4e95-a7d3-cbb42f07e5b2" (UID: "7fe50339-ce37-4e95-a7d3-cbb42f07e5b2"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.796826 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-config" (OuterVolumeSpecName: "config") pod "7fe50339-ce37-4e95-a7d3-cbb42f07e5b2" (UID: "7fe50339-ce37-4e95-a7d3-cbb42f07e5b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.803148 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-kube-api-access-btfxv" (OuterVolumeSpecName: "kube-api-access-btfxv") pod "7fe50339-ce37-4e95-a7d3-cbb42f07e5b2" (UID: "7fe50339-ce37-4e95-a7d3-cbb42f07e5b2"). InnerVolumeSpecName "kube-api-access-btfxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.803197 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7fe50339-ce37-4e95-a7d3-cbb42f07e5b2" (UID: "7fe50339-ce37-4e95-a7d3-cbb42f07e5b2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.827786 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.896705 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.896751 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btfxv\" (UniqueName: \"kubernetes.io/projected/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-kube-api-access-btfxv\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.896762 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.896771 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.997458 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-proxy-ca-bundles\") pod \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.997502 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-serving-cert\") pod \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.997526 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-r7nqf\" (UniqueName: \"kubernetes.io/projected/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-kube-api-access-r7nqf\") pod \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.997569 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-config\") pod \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.997614 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-client-ca\") pod \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\" (UID: \"49eb0e50-c6ad-4813-84c1-4ae52f3255c8\") " Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.998488 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "49eb0e50-c6ad-4813-84c1-4ae52f3255c8" (UID: "49eb0e50-c6ad-4813-84c1-4ae52f3255c8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.998537 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "49eb0e50-c6ad-4813-84c1-4ae52f3255c8" (UID: "49eb0e50-c6ad-4813-84c1-4ae52f3255c8"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:00:09 crc kubenswrapper[4805]: I0216 21:00:09.999356 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-config" (OuterVolumeSpecName: "config") pod "49eb0e50-c6ad-4813-84c1-4ae52f3255c8" (UID: "49eb0e50-c6ad-4813-84c1-4ae52f3255c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.000762 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "49eb0e50-c6ad-4813-84c1-4ae52f3255c8" (UID: "49eb0e50-c6ad-4813-84c1-4ae52f3255c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.002245 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-kube-api-access-r7nqf" (OuterVolumeSpecName: "kube-api-access-r7nqf") pod "49eb0e50-c6ad-4813-84c1-4ae52f3255c8" (UID: "49eb0e50-c6ad-4813-84c1-4ae52f3255c8"). InnerVolumeSpecName "kube-api-access-r7nqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.022662 4805 generic.go:334] "Generic (PLEG): container finished" podID="49eb0e50-c6ad-4813-84c1-4ae52f3255c8" containerID="8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad" exitCode=0 Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.022736 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" event={"ID":"49eb0e50-c6ad-4813-84c1-4ae52f3255c8","Type":"ContainerDied","Data":"8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad"} Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.022772 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" event={"ID":"49eb0e50-c6ad-4813-84c1-4ae52f3255c8","Type":"ContainerDied","Data":"a986bf941226ecaa9079e67369c5b0566a74260380cbc478eb68a7dbbcc2ed5f"} Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.022791 4805 scope.go:117] "RemoveContainer" containerID="8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.022850 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c4769f759-nz8ts" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.026875 4805 generic.go:334] "Generic (PLEG): container finished" podID="7fe50339-ce37-4e95-a7d3-cbb42f07e5b2" containerID="2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0" exitCode=0 Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.026970 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" event={"ID":"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2","Type":"ContainerDied","Data":"2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0"} Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.027026 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" event={"ID":"7fe50339-ce37-4e95-a7d3-cbb42f07e5b2","Type":"ContainerDied","Data":"7a4a8e9b8f4f108913ec2d89bd490338dfb3435a7172bcd96a20ecd33c261cf2"} Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.027134 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.049320 4805 scope.go:117] "RemoveContainer" containerID="8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad" Feb 16 21:00:10 crc kubenswrapper[4805]: E0216 21:00:10.049918 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad\": container with ID starting with 8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad not found: ID does not exist" containerID="8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.049963 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad"} err="failed to get container status \"8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad\": rpc error: code = NotFound desc = could not find container \"8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad\": container with ID starting with 8b5769894e0ad5424fb518d9b984ddd69beb1c7523599d15fbb4c5f242023cad not found: ID does not exist" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.049990 4805 scope.go:117] "RemoveContainer" containerID="2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.067908 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c4769f759-nz8ts"] Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.072228 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c4769f759-nz8ts"] Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.081023 4805 scope.go:117] 
"RemoveContainer" containerID="2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.089141 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk"] Feb 16 21:00:10 crc kubenswrapper[4805]: E0216 21:00:10.089136 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0\": container with ID starting with 2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0 not found: ID does not exist" containerID="2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.089834 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0"} err="failed to get container status \"2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0\": rpc error: code = NotFound desc = could not find container \"2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0\": container with ID starting with 2f5690b0942a1f0bf0d7f04b1b1711756e0422bdef1dabfffdfd606af3fdb6a0 not found: ID does not exist" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.093412 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b8b8c6fc-85dgk"] Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.098904 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.098943 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.098959 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.098978 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.098993 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7nqf\" (UniqueName: \"kubernetes.io/projected/49eb0e50-c6ad-4813-84c1-4ae52f3255c8-kube-api-access-r7nqf\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.710405 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d5f8c5dff-25mz2"] Feb 16 21:00:10 crc kubenswrapper[4805]: E0216 21:00:10.711270 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49eb0e50-c6ad-4813-84c1-4ae52f3255c8" containerName="controller-manager" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.711289 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="49eb0e50-c6ad-4813-84c1-4ae52f3255c8" containerName="controller-manager" Feb 16 21:00:10 crc kubenswrapper[4805]: E0216 21:00:10.711308 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe50339-ce37-4e95-a7d3-cbb42f07e5b2" containerName="route-controller-manager" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.711340 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe50339-ce37-4e95-a7d3-cbb42f07e5b2" containerName="route-controller-manager" Feb 16 21:00:10 crc kubenswrapper[4805]: E0216 21:00:10.711351 4805 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce801e4-7f25-47d8-8860-e939a652ed28" containerName="collect-profiles" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.711360 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce801e4-7f25-47d8-8860-e939a652ed28" containerName="collect-profiles" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.711541 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce801e4-7f25-47d8-8860-e939a652ed28" containerName="collect-profiles" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.711581 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe50339-ce37-4e95-a7d3-cbb42f07e5b2" containerName="route-controller-manager" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.711591 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="49eb0e50-c6ad-4813-84c1-4ae52f3255c8" containerName="controller-manager" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.712180 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.713956 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb"] Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.714547 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.716458 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.717088 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.718028 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.718657 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.719746 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.719977 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.720133 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.722225 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.722531 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.722705 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 
16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.723356 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.723522 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.727335 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.727519 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb"] Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.736529 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d5f8c5dff-25mz2"] Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.809173 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47wj\" (UniqueName: \"kubernetes.io/projected/782ce1ae-f234-472a-9a91-b832e0eea6b9-kube-api-access-z47wj\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.809220 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/782ce1ae-f234-472a-9a91-b832e0eea6b9-client-ca\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.809237 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/782ce1ae-f234-472a-9a91-b832e0eea6b9-serving-cert\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.809255 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ca0a142-9c2b-4506-b371-793f572be1f3-serving-cert\") pod \"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.809279 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98z6z\" (UniqueName: \"kubernetes.io/projected/0ca0a142-9c2b-4506-b371-793f572be1f3-kube-api-access-98z6z\") pod \"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.809295 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782ce1ae-f234-472a-9a91-b832e0eea6b9-config\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.809314 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ca0a142-9c2b-4506-b371-793f572be1f3-config\") pod \"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " 
pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.809344 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/782ce1ae-f234-472a-9a91-b832e0eea6b9-proxy-ca-bundles\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.809359 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ca0a142-9c2b-4506-b371-793f572be1f3-client-ca\") pod \"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.910345 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ca0a142-9c2b-4506-b371-793f572be1f3-serving-cert\") pod \"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.910678 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98z6z\" (UniqueName: \"kubernetes.io/projected/0ca0a142-9c2b-4506-b371-793f572be1f3-kube-api-access-98z6z\") pod \"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.910827 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/782ce1ae-f234-472a-9a91-b832e0eea6b9-config\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.910934 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ca0a142-9c2b-4506-b371-793f572be1f3-config\") pod \"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.911056 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/782ce1ae-f234-472a-9a91-b832e0eea6b9-proxy-ca-bundles\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.911157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ca0a142-9c2b-4506-b371-793f572be1f3-client-ca\") pod \"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.911290 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z47wj\" (UniqueName: \"kubernetes.io/projected/782ce1ae-f234-472a-9a91-b832e0eea6b9-kube-api-access-z47wj\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc 
kubenswrapper[4805]: I0216 21:00:10.911385 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/782ce1ae-f234-472a-9a91-b832e0eea6b9-client-ca\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.911475 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/782ce1ae-f234-472a-9a91-b832e0eea6b9-serving-cert\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.913120 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ca0a142-9c2b-4506-b371-793f572be1f3-config\") pod \"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.913257 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ca0a142-9c2b-4506-b371-793f572be1f3-client-ca\") pod \"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.913372 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/782ce1ae-f234-472a-9a91-b832e0eea6b9-client-ca\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " 
pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.913555 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/782ce1ae-f234-472a-9a91-b832e0eea6b9-proxy-ca-bundles\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.914077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782ce1ae-f234-472a-9a91-b832e0eea6b9-config\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.916350 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ca0a142-9c2b-4506-b371-793f572be1f3-serving-cert\") pod \"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.916394 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/782ce1ae-f234-472a-9a91-b832e0eea6b9-serving-cert\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.940310 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98z6z\" (UniqueName: \"kubernetes.io/projected/0ca0a142-9c2b-4506-b371-793f572be1f3-kube-api-access-98z6z\") pod 
\"route-controller-manager-6d8b445bdf-hlnnb\" (UID: \"0ca0a142-9c2b-4506-b371-793f572be1f3\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:10 crc kubenswrapper[4805]: I0216 21:00:10.944122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z47wj\" (UniqueName: \"kubernetes.io/projected/782ce1ae-f234-472a-9a91-b832e0eea6b9-kube-api-access-z47wj\") pod \"controller-manager-d5f8c5dff-25mz2\" (UID: \"782ce1ae-f234-472a-9a91-b832e0eea6b9\") " pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:11 crc kubenswrapper[4805]: I0216 21:00:11.030649 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:11 crc kubenswrapper[4805]: I0216 21:00:11.042743 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:11 crc kubenswrapper[4805]: I0216 21:00:11.345902 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d5f8c5dff-25mz2"] Feb 16 21:00:11 crc kubenswrapper[4805]: I0216 21:00:11.482546 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb"] Feb 16 21:00:11 crc kubenswrapper[4805]: W0216 21:00:11.492147 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ca0a142_9c2b_4506_b371_793f572be1f3.slice/crio-d650cbde81bd23d354546aaa24b712738c06ab7a64689313e0612cee0c8e980f WatchSource:0}: Error finding container d650cbde81bd23d354546aaa24b712738c06ab7a64689313e0612cee0c8e980f: Status 404 returned error can't find the container with id d650cbde81bd23d354546aaa24b712738c06ab7a64689313e0612cee0c8e980f Feb 16 21:00:11 crc 
kubenswrapper[4805]: I0216 21:00:11.603676 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49eb0e50-c6ad-4813-84c1-4ae52f3255c8" path="/var/lib/kubelet/pods/49eb0e50-c6ad-4813-84c1-4ae52f3255c8/volumes" Feb 16 21:00:11 crc kubenswrapper[4805]: I0216 21:00:11.604986 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fe50339-ce37-4e95-a7d3-cbb42f07e5b2" path="/var/lib/kubelet/pods/7fe50339-ce37-4e95-a7d3-cbb42f07e5b2/volumes" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.041968 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" event={"ID":"0ca0a142-9c2b-4506-b371-793f572be1f3","Type":"ContainerStarted","Data":"3addd5a7daac2aeecb47b4d628ea90699da9e84a31b595eea2848788ae786619"} Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.042030 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" event={"ID":"0ca0a142-9c2b-4506-b371-793f572be1f3","Type":"ContainerStarted","Data":"d650cbde81bd23d354546aaa24b712738c06ab7a64689313e0612cee0c8e980f"} Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.042197 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.043868 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" event={"ID":"782ce1ae-f234-472a-9a91-b832e0eea6b9","Type":"ContainerStarted","Data":"5b64a76929f764884283506875dcafc20a4b6385cc252dc0c76abc289f5a6c60"} Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.043893 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" 
event={"ID":"782ce1ae-f234-472a-9a91-b832e0eea6b9","Type":"ContainerStarted","Data":"196e32527c8dbfa760b8cc8b7d077bcf02417f58110910a49d73dc80d8b452d7"} Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.044271 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.051109 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.060053 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" podStartSLOduration=3.060030003 podStartE2EDuration="3.060030003s" podCreationTimestamp="2026-02-16 21:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:00:12.05695348 +0000 UTC m=+229.875636775" watchObservedRunningTime="2026-02-16 21:00:12.060030003 +0000 UTC m=+229.878713308" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.075398 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d5f8c5dff-25mz2" podStartSLOduration=3.0753771 podStartE2EDuration="3.0753771s" podCreationTimestamp="2026-02-16 21:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:00:12.071506185 +0000 UTC m=+229.890189480" watchObservedRunningTime="2026-02-16 21:00:12.0753771 +0000 UTC m=+229.894060405" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.269326 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ql9vs"] Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.269903 4805 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ql9vs" podUID="2dccaada-bb80-4a57-b9f2-5b190830fc87" containerName="registry-server" containerID="cri-o://79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6" gracePeriod=30 Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.281135 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnj78"] Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.281353 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bnj78" podUID="2acb9625-6b32-480b-9f3d-97976930c437" containerName="registry-server" containerID="cri-o://8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55" gracePeriod=30 Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.292705 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hkjs5"] Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.293004 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" podUID="a9ac0f09-69ad-444c-b827-cbb26c8623fb" containerName="marketplace-operator" containerID="cri-o://5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8" gracePeriod=30 Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.307409 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrj84"] Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.307687 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hrj84" podUID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerName="registry-server" containerID="cri-o://bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec" gracePeriod=30 Feb 16 21:00:12 crc 
kubenswrapper[4805]: I0216 21:00:12.314768 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6cb4l"] Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.315450 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.321051 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d8b445bdf-hlnnb" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.322356 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mw567"] Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.322566 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mw567" podUID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" containerName="registry-server" containerID="cri-o://1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8" gracePeriod=30 Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.333120 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/14dd9df6-740d-4d6b-90cc-f62d0cb76f4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6cb4l\" (UID: \"14dd9df6-740d-4d6b-90cc-f62d0cb76f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.333169 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14dd9df6-740d-4d6b-90cc-f62d0cb76f4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6cb4l\" (UID: \"14dd9df6-740d-4d6b-90cc-f62d0cb76f4d\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.333220 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqn5g\" (UniqueName: \"kubernetes.io/projected/14dd9df6-740d-4d6b-90cc-f62d0cb76f4d-kube-api-access-pqn5g\") pod \"marketplace-operator-79b997595-6cb4l\" (UID: \"14dd9df6-740d-4d6b-90cc-f62d0cb76f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.373342 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6cb4l"] Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.434823 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/14dd9df6-740d-4d6b-90cc-f62d0cb76f4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6cb4l\" (UID: \"14dd9df6-740d-4d6b-90cc-f62d0cb76f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.434876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14dd9df6-740d-4d6b-90cc-f62d0cb76f4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6cb4l\" (UID: \"14dd9df6-740d-4d6b-90cc-f62d0cb76f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.434928 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqn5g\" (UniqueName: \"kubernetes.io/projected/14dd9df6-740d-4d6b-90cc-f62d0cb76f4d-kube-api-access-pqn5g\") pod \"marketplace-operator-79b997595-6cb4l\" (UID: \"14dd9df6-740d-4d6b-90cc-f62d0cb76f4d\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.436567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14dd9df6-740d-4d6b-90cc-f62d0cb76f4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6cb4l\" (UID: \"14dd9df6-740d-4d6b-90cc-f62d0cb76f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.443617 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/14dd9df6-740d-4d6b-90cc-f62d0cb76f4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6cb4l\" (UID: \"14dd9df6-740d-4d6b-90cc-f62d0cb76f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.461494 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqn5g\" (UniqueName: \"kubernetes.io/projected/14dd9df6-740d-4d6b-90cc-f62d0cb76f4d-kube-api-access-pqn5g\") pod \"marketplace-operator-79b997595-6cb4l\" (UID: \"14dd9df6-740d-4d6b-90cc-f62d0cb76f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.680820 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.751178 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnj78" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.841937 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl9hq\" (UniqueName: \"kubernetes.io/projected/2acb9625-6b32-480b-9f3d-97976930c437-kube-api-access-hl9hq\") pod \"2acb9625-6b32-480b-9f3d-97976930c437\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.842259 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-catalog-content\") pod \"2acb9625-6b32-480b-9f3d-97976930c437\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.842286 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-utilities\") pod \"2acb9625-6b32-480b-9f3d-97976930c437\" (UID: \"2acb9625-6b32-480b-9f3d-97976930c437\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.843904 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-utilities" (OuterVolumeSpecName: "utilities") pod "2acb9625-6b32-480b-9f3d-97976930c437" (UID: "2acb9625-6b32-480b-9f3d-97976930c437"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.847704 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2acb9625-6b32-480b-9f3d-97976930c437-kube-api-access-hl9hq" (OuterVolumeSpecName: "kube-api-access-hl9hq") pod "2acb9625-6b32-480b-9f3d-97976930c437" (UID: "2acb9625-6b32-480b-9f3d-97976930c437"). InnerVolumeSpecName "kube-api-access-hl9hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.851789 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.861816 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrj84" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.902941 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.906941 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mw567" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.925422 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2acb9625-6b32-480b-9f3d-97976930c437" (UID: "2acb9625-6b32-480b-9f3d-97976930c437"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943273 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-utilities\") pod \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943328 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-operator-metrics\") pod \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\" (UID: \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943352 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-trusted-ca\") pod \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\" (UID: \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943387 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-catalog-content\") pod \"2dccaada-bb80-4a57-b9f2-5b190830fc87\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943418 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-catalog-content\") pod \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\" (UID: \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943458 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-g6hwh\" (UniqueName: \"kubernetes.io/projected/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-kube-api-access-g6hwh\") pod \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943473 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwkzw\" (UniqueName: \"kubernetes.io/projected/e1a06996-a3de-413f-b05e-852d5c0fa7ff-kube-api-access-jwkzw\") pod \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\" (UID: \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943489 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-utilities\") pod \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\" (UID: \"e1a06996-a3de-413f-b05e-852d5c0fa7ff\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943544 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v48l7\" (UniqueName: \"kubernetes.io/projected/2dccaada-bb80-4a57-b9f2-5b190830fc87-kube-api-access-v48l7\") pod \"2dccaada-bb80-4a57-b9f2-5b190830fc87\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943580 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-catalog-content\") pod \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\" (UID: \"6bba21b2-c506-44e1-87e9-9ef5067ff1e5\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943594 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7jqh\" (UniqueName: \"kubernetes.io/projected/a9ac0f09-69ad-444c-b827-cbb26c8623fb-kube-api-access-m7jqh\") pod \"a9ac0f09-69ad-444c-b827-cbb26c8623fb\" (UID: 
\"a9ac0f09-69ad-444c-b827-cbb26c8623fb\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943615 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-utilities\") pod \"2dccaada-bb80-4a57-b9f2-5b190830fc87\" (UID: \"2dccaada-bb80-4a57-b9f2-5b190830fc87\") " Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943876 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl9hq\" (UniqueName: \"kubernetes.io/projected/2acb9625-6b32-480b-9f3d-97976930c437-kube-api-access-hl9hq\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943889 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.943898 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2acb9625-6b32-480b-9f3d-97976930c437-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.944380 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-utilities" (OuterVolumeSpecName: "utilities") pod "6bba21b2-c506-44e1-87e9-9ef5067ff1e5" (UID: "6bba21b2-c506-44e1-87e9-9ef5067ff1e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.944656 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-utilities" (OuterVolumeSpecName: "utilities") pod "2dccaada-bb80-4a57-b9f2-5b190830fc87" (UID: "2dccaada-bb80-4a57-b9f2-5b190830fc87"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.946876 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a9ac0f09-69ad-444c-b827-cbb26c8623fb" (UID: "a9ac0f09-69ad-444c-b827-cbb26c8623fb"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.946911 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-utilities" (OuterVolumeSpecName: "utilities") pod "e1a06996-a3de-413f-b05e-852d5c0fa7ff" (UID: "e1a06996-a3de-413f-b05e-852d5c0fa7ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.947246 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a9ac0f09-69ad-444c-b827-cbb26c8623fb" (UID: "a9ac0f09-69ad-444c-b827-cbb26c8623fb"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.948018 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ac0f09-69ad-444c-b827-cbb26c8623fb-kube-api-access-m7jqh" (OuterVolumeSpecName: "kube-api-access-m7jqh") pod "a9ac0f09-69ad-444c-b827-cbb26c8623fb" (UID: "a9ac0f09-69ad-444c-b827-cbb26c8623fb"). InnerVolumeSpecName "kube-api-access-m7jqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.948491 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dccaada-bb80-4a57-b9f2-5b190830fc87-kube-api-access-v48l7" (OuterVolumeSpecName: "kube-api-access-v48l7") pod "2dccaada-bb80-4a57-b9f2-5b190830fc87" (UID: "2dccaada-bb80-4a57-b9f2-5b190830fc87"). InnerVolumeSpecName "kube-api-access-v48l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.957732 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a06996-a3de-413f-b05e-852d5c0fa7ff-kube-api-access-jwkzw" (OuterVolumeSpecName: "kube-api-access-jwkzw") pod "e1a06996-a3de-413f-b05e-852d5c0fa7ff" (UID: "e1a06996-a3de-413f-b05e-852d5c0fa7ff"). InnerVolumeSpecName "kube-api-access-jwkzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.965451 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-kube-api-access-g6hwh" (OuterVolumeSpecName: "kube-api-access-g6hwh") pod "6bba21b2-c506-44e1-87e9-9ef5067ff1e5" (UID: "6bba21b2-c506-44e1-87e9-9ef5067ff1e5"). InnerVolumeSpecName "kube-api-access-g6hwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:12 crc kubenswrapper[4805]: I0216 21:00:12.993577 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bba21b2-c506-44e1-87e9-9ef5067ff1e5" (UID: "6bba21b2-c506-44e1-87e9-9ef5067ff1e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.014521 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dccaada-bb80-4a57-b9f2-5b190830fc87" (UID: "2dccaada-bb80-4a57-b9f2-5b190830fc87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.045218 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6hwh\" (UniqueName: \"kubernetes.io/projected/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-kube-api-access-g6hwh\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.045273 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwkzw\" (UniqueName: \"kubernetes.io/projected/e1a06996-a3de-413f-b05e-852d5c0fa7ff-kube-api-access-jwkzw\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.045283 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.045295 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v48l7\" (UniqueName: \"kubernetes.io/projected/2dccaada-bb80-4a57-b9f2-5b190830fc87-kube-api-access-v48l7\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.045305 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.045313 4805 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-m7jqh\" (UniqueName: \"kubernetes.io/projected/a9ac0f09-69ad-444c-b827-cbb26c8623fb-kube-api-access-m7jqh\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.045320 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.045330 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bba21b2-c506-44e1-87e9-9ef5067ff1e5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.045338 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.045347 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9ac0f09-69ad-444c-b827-cbb26c8623fb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.045354 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dccaada-bb80-4a57-b9f2-5b190830fc87-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.064440 4805 generic.go:334] "Generic (PLEG): container finished" podID="a9ac0f09-69ad-444c-b827-cbb26c8623fb" containerID="5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8" exitCode=0 Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.064543 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.064616 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" event={"ID":"a9ac0f09-69ad-444c-b827-cbb26c8623fb","Type":"ContainerDied","Data":"5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8"} Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.064749 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hkjs5" event={"ID":"a9ac0f09-69ad-444c-b827-cbb26c8623fb","Type":"ContainerDied","Data":"931a4c5a61e00b78b44739d950781052ae7b2b3fcceb99fc462fc75dde66f984"} Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.064798 4805 scope.go:117] "RemoveContainer" containerID="5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.070543 4805 generic.go:334] "Generic (PLEG): container finished" podID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" containerID="1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8" exitCode=0 Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.070614 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw567" event={"ID":"e1a06996-a3de-413f-b05e-852d5c0fa7ff","Type":"ContainerDied","Data":"1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8"} Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.070639 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw567" event={"ID":"e1a06996-a3de-413f-b05e-852d5c0fa7ff","Type":"ContainerDied","Data":"909eeabe438281a1189410aa1d1ae522be90680c773468b5ba862a935b1c7a1d"} Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.070888 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mw567" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.077211 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrj84" event={"ID":"6bba21b2-c506-44e1-87e9-9ef5067ff1e5","Type":"ContainerDied","Data":"bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec"} Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.077293 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrj84" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.077168 4805 generic.go:334] "Generic (PLEG): container finished" podID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerID="bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec" exitCode=0 Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.077779 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrj84" event={"ID":"6bba21b2-c506-44e1-87e9-9ef5067ff1e5","Type":"ContainerDied","Data":"223c03e00b686a9bed1db31428c7623ff512bab050c77fabdf2e9afad4bd2067"} Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.081619 4805 generic.go:334] "Generic (PLEG): container finished" podID="2acb9625-6b32-480b-9f3d-97976930c437" containerID="8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55" exitCode=0 Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.081710 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnj78" event={"ID":"2acb9625-6b32-480b-9f3d-97976930c437","Type":"ContainerDied","Data":"8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55"} Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.081786 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnj78" 
event={"ID":"2acb9625-6b32-480b-9f3d-97976930c437","Type":"ContainerDied","Data":"c16b0057acf5626b78e0f9c42cb3cd255cea3d51ae964742cea78ef67eaec5ec"} Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.081849 4805 scope.go:117] "RemoveContainer" containerID="5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.081858 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnj78" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.082250 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8\": container with ID starting with 5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8 not found: ID does not exist" containerID="5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.082281 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8"} err="failed to get container status \"5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8\": rpc error: code = NotFound desc = could not find container \"5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8\": container with ID starting with 5604eddd5c86e08ced27aab0894e232bfa1a0b95fcaf9eb147a2e6ea895170e8 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.082339 4805 scope.go:117] "RemoveContainer" containerID="1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.087316 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dccaada-bb80-4a57-b9f2-5b190830fc87" 
containerID="79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6" exitCode=0 Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.087396 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql9vs" event={"ID":"2dccaada-bb80-4a57-b9f2-5b190830fc87","Type":"ContainerDied","Data":"79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6"} Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.087440 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql9vs" event={"ID":"2dccaada-bb80-4a57-b9f2-5b190830fc87","Type":"ContainerDied","Data":"ef68ef12c1478e6050aeb0eeb8a69f6c8f626fe23e1b96790a54b584dcea9f72"} Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.087483 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ql9vs" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.103183 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hkjs5"] Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.115521 4805 scope.go:117] "RemoveContainer" containerID="05ff83845bc67caca88c38babd74bcfc2da6adfa0d622d21e26c3ef6ed42259f" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.114871 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hkjs5"] Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.120626 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrj84"] Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.121160 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1a06996-a3de-413f-b05e-852d5c0fa7ff" (UID: "e1a06996-a3de-413f-b05e-852d5c0fa7ff"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.126112 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrj84"] Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.130453 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnj78"] Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.138692 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bnj78"] Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.144680 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ql9vs"] Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.146457 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a06996-a3de-413f-b05e-852d5c0fa7ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.148338 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ql9vs"] Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.151663 4805 scope.go:117] "RemoveContainer" containerID="7e9a119c368e9bcfa2c441fa03fbe5c4044311d00b9bdc1d0d4c54f9dc18e91e" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.175436 4805 scope.go:117] "RemoveContainer" containerID="1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.181791 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8\": container with ID starting with 1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8 not found: ID does not exist" 
containerID="1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.181859 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8"} err="failed to get container status \"1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8\": rpc error: code = NotFound desc = could not find container \"1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8\": container with ID starting with 1fe6dce20deeb47930f30fe0591b2e1376a73943cd8e9ea313a16dde743c44c8 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.181895 4805 scope.go:117] "RemoveContainer" containerID="05ff83845bc67caca88c38babd74bcfc2da6adfa0d622d21e26c3ef6ed42259f" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.182432 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05ff83845bc67caca88c38babd74bcfc2da6adfa0d622d21e26c3ef6ed42259f\": container with ID starting with 05ff83845bc67caca88c38babd74bcfc2da6adfa0d622d21e26c3ef6ed42259f not found: ID does not exist" containerID="05ff83845bc67caca88c38babd74bcfc2da6adfa0d622d21e26c3ef6ed42259f" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.182467 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05ff83845bc67caca88c38babd74bcfc2da6adfa0d622d21e26c3ef6ed42259f"} err="failed to get container status \"05ff83845bc67caca88c38babd74bcfc2da6adfa0d622d21e26c3ef6ed42259f\": rpc error: code = NotFound desc = could not find container \"05ff83845bc67caca88c38babd74bcfc2da6adfa0d622d21e26c3ef6ed42259f\": container with ID starting with 05ff83845bc67caca88c38babd74bcfc2da6adfa0d622d21e26c3ef6ed42259f not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.182491 4805 scope.go:117] 
"RemoveContainer" containerID="7e9a119c368e9bcfa2c441fa03fbe5c4044311d00b9bdc1d0d4c54f9dc18e91e" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.183011 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9a119c368e9bcfa2c441fa03fbe5c4044311d00b9bdc1d0d4c54f9dc18e91e\": container with ID starting with 7e9a119c368e9bcfa2c441fa03fbe5c4044311d00b9bdc1d0d4c54f9dc18e91e not found: ID does not exist" containerID="7e9a119c368e9bcfa2c441fa03fbe5c4044311d00b9bdc1d0d4c54f9dc18e91e" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.183041 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9a119c368e9bcfa2c441fa03fbe5c4044311d00b9bdc1d0d4c54f9dc18e91e"} err="failed to get container status \"7e9a119c368e9bcfa2c441fa03fbe5c4044311d00b9bdc1d0d4c54f9dc18e91e\": rpc error: code = NotFound desc = could not find container \"7e9a119c368e9bcfa2c441fa03fbe5c4044311d00b9bdc1d0d4c54f9dc18e91e\": container with ID starting with 7e9a119c368e9bcfa2c441fa03fbe5c4044311d00b9bdc1d0d4c54f9dc18e91e not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.183060 4805 scope.go:117] "RemoveContainer" containerID="bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.205156 4805 scope.go:117] "RemoveContainer" containerID="af76306cfd962a79cd5349fd7148a2a4b171bcd21519bb9ec4cee3c29d3bf97a" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.227909 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6cb4l"] Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.242022 4805 scope.go:117] "RemoveContainer" containerID="c7300f43a252eb39931bee78b6ed1542aef31bea77753ad97b9c7b77dfbb01c6" Feb 16 21:00:13 crc kubenswrapper[4805]: W0216 21:00:13.244154 4805 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14dd9df6_740d_4d6b_90cc_f62d0cb76f4d.slice/crio-c34d0296455f6ee8c9bf1261a0c8308cc98841ac799f7da17f536c770897f1a5 WatchSource:0}: Error finding container c34d0296455f6ee8c9bf1261a0c8308cc98841ac799f7da17f536c770897f1a5: Status 404 returned error can't find the container with id c34d0296455f6ee8c9bf1261a0c8308cc98841ac799f7da17f536c770897f1a5 Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.254867 4805 scope.go:117] "RemoveContainer" containerID="bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.255542 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec\": container with ID starting with bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec not found: ID does not exist" containerID="bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.255578 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec"} err="failed to get container status \"bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec\": rpc error: code = NotFound desc = could not find container \"bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec\": container with ID starting with bf15b5963be5c45be2107595eab7e4af2dd087d83e39036e9d1930ec1ff78fec not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.255617 4805 scope.go:117] "RemoveContainer" containerID="af76306cfd962a79cd5349fd7148a2a4b171bcd21519bb9ec4cee3c29d3bf97a" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.256400 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"af76306cfd962a79cd5349fd7148a2a4b171bcd21519bb9ec4cee3c29d3bf97a\": container with ID starting with af76306cfd962a79cd5349fd7148a2a4b171bcd21519bb9ec4cee3c29d3bf97a not found: ID does not exist" containerID="af76306cfd962a79cd5349fd7148a2a4b171bcd21519bb9ec4cee3c29d3bf97a" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.256452 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af76306cfd962a79cd5349fd7148a2a4b171bcd21519bb9ec4cee3c29d3bf97a"} err="failed to get container status \"af76306cfd962a79cd5349fd7148a2a4b171bcd21519bb9ec4cee3c29d3bf97a\": rpc error: code = NotFound desc = could not find container \"af76306cfd962a79cd5349fd7148a2a4b171bcd21519bb9ec4cee3c29d3bf97a\": container with ID starting with af76306cfd962a79cd5349fd7148a2a4b171bcd21519bb9ec4cee3c29d3bf97a not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.256483 4805 scope.go:117] "RemoveContainer" containerID="c7300f43a252eb39931bee78b6ed1542aef31bea77753ad97b9c7b77dfbb01c6" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.256825 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7300f43a252eb39931bee78b6ed1542aef31bea77753ad97b9c7b77dfbb01c6\": container with ID starting with c7300f43a252eb39931bee78b6ed1542aef31bea77753ad97b9c7b77dfbb01c6 not found: ID does not exist" containerID="c7300f43a252eb39931bee78b6ed1542aef31bea77753ad97b9c7b77dfbb01c6" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.256860 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7300f43a252eb39931bee78b6ed1542aef31bea77753ad97b9c7b77dfbb01c6"} err="failed to get container status \"c7300f43a252eb39931bee78b6ed1542aef31bea77753ad97b9c7b77dfbb01c6\": rpc error: code = NotFound desc = could not find container 
\"c7300f43a252eb39931bee78b6ed1542aef31bea77753ad97b9c7b77dfbb01c6\": container with ID starting with c7300f43a252eb39931bee78b6ed1542aef31bea77753ad97b9c7b77dfbb01c6 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.256900 4805 scope.go:117] "RemoveContainer" containerID="8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.276013 4805 scope.go:117] "RemoveContainer" containerID="a7734c0a297f3f725bfd7d409a04b801626eaeea36e6665e99f9002080ce57fa" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.292017 4805 scope.go:117] "RemoveContainer" containerID="4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.310913 4805 scope.go:117] "RemoveContainer" containerID="8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.311307 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55\": container with ID starting with 8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55 not found: ID does not exist" containerID="8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.311349 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55"} err="failed to get container status \"8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55\": rpc error: code = NotFound desc = could not find container \"8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55\": container with ID starting with 8347f36307c3305b2af1b44a7dae62c2c683ee8e2c1ffdd0a3ee85c4a2eb4d55 not found: ID does not exist" Feb 16 21:00:13 crc 
kubenswrapper[4805]: I0216 21:00:13.311372 4805 scope.go:117] "RemoveContainer" containerID="a7734c0a297f3f725bfd7d409a04b801626eaeea36e6665e99f9002080ce57fa" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.311651 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7734c0a297f3f725bfd7d409a04b801626eaeea36e6665e99f9002080ce57fa\": container with ID starting with a7734c0a297f3f725bfd7d409a04b801626eaeea36e6665e99f9002080ce57fa not found: ID does not exist" containerID="a7734c0a297f3f725bfd7d409a04b801626eaeea36e6665e99f9002080ce57fa" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.311680 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7734c0a297f3f725bfd7d409a04b801626eaeea36e6665e99f9002080ce57fa"} err="failed to get container status \"a7734c0a297f3f725bfd7d409a04b801626eaeea36e6665e99f9002080ce57fa\": rpc error: code = NotFound desc = could not find container \"a7734c0a297f3f725bfd7d409a04b801626eaeea36e6665e99f9002080ce57fa\": container with ID starting with a7734c0a297f3f725bfd7d409a04b801626eaeea36e6665e99f9002080ce57fa not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.311697 4805 scope.go:117] "RemoveContainer" containerID="4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.311997 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478\": container with ID starting with 4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478 not found: ID does not exist" containerID="4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.312024 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478"} err="failed to get container status \"4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478\": rpc error: code = NotFound desc = could not find container \"4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478\": container with ID starting with 4eeda5ea15fed587acbc8584ea82c87d27c96cff039bc27c073b89fe824f7478 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.312037 4805 scope.go:117] "RemoveContainer" containerID="79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.334136 4805 scope.go:117] "RemoveContainer" containerID="54a0859f3bf70f8758e3a378fe027a46402808e05765df191ab47e4dffff618b" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.359597 4805 scope.go:117] "RemoveContainer" containerID="02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.373344 4805 scope.go:117] "RemoveContainer" containerID="79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.373885 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6\": container with ID starting with 79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6 not found: ID does not exist" containerID="79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.373936 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6"} err="failed to get container status \"79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6\": rpc error: code = 
NotFound desc = could not find container \"79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6\": container with ID starting with 79cc4d44a084ddb2a0ea351d9725b6a4ea9dbfedf1bd4f391a4237492516ada6 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.373966 4805 scope.go:117] "RemoveContainer" containerID="54a0859f3bf70f8758e3a378fe027a46402808e05765df191ab47e4dffff618b" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.374216 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54a0859f3bf70f8758e3a378fe027a46402808e05765df191ab47e4dffff618b\": container with ID starting with 54a0859f3bf70f8758e3a378fe027a46402808e05765df191ab47e4dffff618b not found: ID does not exist" containerID="54a0859f3bf70f8758e3a378fe027a46402808e05765df191ab47e4dffff618b" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.374246 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54a0859f3bf70f8758e3a378fe027a46402808e05765df191ab47e4dffff618b"} err="failed to get container status \"54a0859f3bf70f8758e3a378fe027a46402808e05765df191ab47e4dffff618b\": rpc error: code = NotFound desc = could not find container \"54a0859f3bf70f8758e3a378fe027a46402808e05765df191ab47e4dffff618b\": container with ID starting with 54a0859f3bf70f8758e3a378fe027a46402808e05765df191ab47e4dffff618b not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.374268 4805 scope.go:117] "RemoveContainer" containerID="02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1" Feb 16 21:00:13 crc kubenswrapper[4805]: E0216 21:00:13.374472 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1\": container with ID starting with 
02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1 not found: ID does not exist" containerID="02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.374501 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1"} err="failed to get container status \"02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1\": rpc error: code = NotFound desc = could not find container \"02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1\": container with ID starting with 02a6a38ff2b8c4261e5f2c9b4558cbd177e6343222048d24b9d70f7dc89d74b1 not found: ID does not exist" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.411063 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mw567"] Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.414832 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mw567"] Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.605215 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2acb9625-6b32-480b-9f3d-97976930c437" path="/var/lib/kubelet/pods/2acb9625-6b32-480b-9f3d-97976930c437/volumes" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.606147 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dccaada-bb80-4a57-b9f2-5b190830fc87" path="/var/lib/kubelet/pods/2dccaada-bb80-4a57-b9f2-5b190830fc87/volumes" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.606969 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" path="/var/lib/kubelet/pods/6bba21b2-c506-44e1-87e9-9ef5067ff1e5/volumes" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.607717 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a9ac0f09-69ad-444c-b827-cbb26c8623fb" path="/var/lib/kubelet/pods/a9ac0f09-69ad-444c-b827-cbb26c8623fb/volumes" Feb 16 21:00:13 crc kubenswrapper[4805]: I0216 21:00:13.608272 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" path="/var/lib/kubelet/pods/e1a06996-a3de-413f-b05e-852d5c0fa7ff/volumes" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.099261 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" event={"ID":"14dd9df6-740d-4d6b-90cc-f62d0cb76f4d","Type":"ContainerStarted","Data":"6d4c0e8e2976e4f702b2d1b84d20bb3194c8358b6f5bb4c7a50f37199d80975e"} Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.099299 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" event={"ID":"14dd9df6-740d-4d6b-90cc-f62d0cb76f4d","Type":"ContainerStarted","Data":"c34d0296455f6ee8c9bf1261a0c8308cc98841ac799f7da17f536c770897f1a5"} Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.099569 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.102417 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.114631 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6cb4l" podStartSLOduration=2.114612386 podStartE2EDuration="2.114612386s" podCreationTimestamp="2026-02-16 21:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:00:14.111940254 +0000 UTC m=+231.930623549" watchObservedRunningTime="2026-02-16 21:00:14.114612386 
+0000 UTC m=+231.933295681" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.484966 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pmfv7"] Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.485413 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2acb9625-6b32-480b-9f3d-97976930c437" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.485529 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2acb9625-6b32-480b-9f3d-97976930c437" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.485594 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ac0f09-69ad-444c-b827-cbb26c8623fb" containerName="marketplace-operator" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.485670 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ac0f09-69ad-444c-b827-cbb26c8623fb" containerName="marketplace-operator" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.485772 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dccaada-bb80-4a57-b9f2-5b190830fc87" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.485874 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dccaada-bb80-4a57-b9f2-5b190830fc87" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.485956 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dccaada-bb80-4a57-b9f2-5b190830fc87" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.486060 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dccaada-bb80-4a57-b9f2-5b190830fc87" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.486130 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2acb9625-6b32-480b-9f3d-97976930c437" 
containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.486209 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2acb9625-6b32-480b-9f3d-97976930c437" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.486275 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.486329 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.486383 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dccaada-bb80-4a57-b9f2-5b190830fc87" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.486441 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dccaada-bb80-4a57-b9f2-5b190830fc87" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.486496 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.486566 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.486688 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.486802 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.486947 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2acb9625-6b32-480b-9f3d-97976930c437" 
containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.487052 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2acb9625-6b32-480b-9f3d-97976930c437" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.487218 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.487362 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.487455 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.487538 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerName="extract-utilities" Feb 16 21:00:14 crc kubenswrapper[4805]: E0216 21:00:14.487616 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.487700 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerName="extract-content" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.487968 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dccaada-bb80-4a57-b9f2-5b190830fc87" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.488091 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bba21b2-c506-44e1-87e9-9ef5067ff1e5" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.488212 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a06996-a3de-413f-b05e-852d5c0fa7ff" 
containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.488349 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9ac0f09-69ad-444c-b827-cbb26c8623fb" containerName="marketplace-operator" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.488450 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2acb9625-6b32-480b-9f3d-97976930c437" containerName="registry-server" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.489585 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.492658 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.496065 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pmfv7"] Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.560493 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-utilities\") pod \"certified-operators-pmfv7\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") " pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.561037 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-catalog-content\") pod \"certified-operators-pmfv7\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") " pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.561377 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-697p2\" (UniqueName: \"kubernetes.io/projected/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-kube-api-access-697p2\") pod \"certified-operators-pmfv7\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") " pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.664763 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-697p2\" (UniqueName: \"kubernetes.io/projected/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-kube-api-access-697p2\") pod \"certified-operators-pmfv7\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") " pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.664880 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-utilities\") pod \"certified-operators-pmfv7\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") " pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.664901 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-catalog-content\") pod \"certified-operators-pmfv7\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") " pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.665556 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-utilities\") pod \"certified-operators-pmfv7\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") " pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.666141 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-catalog-content\") pod \"certified-operators-pmfv7\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") " pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.692496 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-48dqc"] Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.704712 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48dqc"] Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.705204 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.707205 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-697p2\" (UniqueName: \"kubernetes.io/projected/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-kube-api-access-697p2\") pod \"certified-operators-pmfv7\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") " pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.709715 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.766197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g29x\" (UniqueName: \"kubernetes.io/projected/b392345c-7432-4562-a35a-5205eea9e26a-kube-api-access-4g29x\") pod \"community-operators-48dqc\" (UID: \"b392345c-7432-4562-a35a-5205eea9e26a\") " pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.766357 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b392345c-7432-4562-a35a-5205eea9e26a-utilities\") pod \"community-operators-48dqc\" (UID: \"b392345c-7432-4562-a35a-5205eea9e26a\") " pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.766449 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b392345c-7432-4562-a35a-5205eea9e26a-catalog-content\") pod \"community-operators-48dqc\" (UID: \"b392345c-7432-4562-a35a-5205eea9e26a\") " pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.849674 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.869344 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b392345c-7432-4562-a35a-5205eea9e26a-utilities\") pod \"community-operators-48dqc\" (UID: \"b392345c-7432-4562-a35a-5205eea9e26a\") " pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.869411 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b392345c-7432-4562-a35a-5205eea9e26a-catalog-content\") pod \"community-operators-48dqc\" (UID: \"b392345c-7432-4562-a35a-5205eea9e26a\") " pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.869461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g29x\" (UniqueName: \"kubernetes.io/projected/b392345c-7432-4562-a35a-5205eea9e26a-kube-api-access-4g29x\") pod \"community-operators-48dqc\" (UID: \"b392345c-7432-4562-a35a-5205eea9e26a\") " 
pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.870077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b392345c-7432-4562-a35a-5205eea9e26a-catalog-content\") pod \"community-operators-48dqc\" (UID: \"b392345c-7432-4562-a35a-5205eea9e26a\") " pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.870330 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b392345c-7432-4562-a35a-5205eea9e26a-utilities\") pod \"community-operators-48dqc\" (UID: \"b392345c-7432-4562-a35a-5205eea9e26a\") " pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:14 crc kubenswrapper[4805]: I0216 21:00:14.890467 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g29x\" (UniqueName: \"kubernetes.io/projected/b392345c-7432-4562-a35a-5205eea9e26a-kube-api-access-4g29x\") pod \"community-operators-48dqc\" (UID: \"b392345c-7432-4562-a35a-5205eea9e26a\") " pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:15 crc kubenswrapper[4805]: I0216 21:00:15.037471 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:15 crc kubenswrapper[4805]: I0216 21:00:15.243269 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pmfv7"] Feb 16 21:00:15 crc kubenswrapper[4805]: W0216 21:00:15.252336 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18fc0a7f_912c_4900_9bfe_9c2b5049eba4.slice/crio-de5b55369c4d38784a3033784b9a7355ab183915b87c16d360710ba4b85ee501 WatchSource:0}: Error finding container de5b55369c4d38784a3033784b9a7355ab183915b87c16d360710ba4b85ee501: Status 404 returned error can't find the container with id de5b55369c4d38784a3033784b9a7355ab183915b87c16d360710ba4b85ee501 Feb 16 21:00:15 crc kubenswrapper[4805]: I0216 21:00:15.451428 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48dqc"] Feb 16 21:00:15 crc kubenswrapper[4805]: W0216 21:00:15.458482 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb392345c_7432_4562_a35a_5205eea9e26a.slice/crio-04cf00903834b46d15271ffc6c5fb2de79e1a6baf90ca43893948527d7296fe9 WatchSource:0}: Error finding container 04cf00903834b46d15271ffc6c5fb2de79e1a6baf90ca43893948527d7296fe9: Status 404 returned error can't find the container with id 04cf00903834b46d15271ffc6c5fb2de79e1a6baf90ca43893948527d7296fe9 Feb 16 21:00:15 crc kubenswrapper[4805]: E0216 21:00:15.677814 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb392345c_7432_4562_a35a_5205eea9e26a.slice/crio-conmon-80f5d1069517f8bf5dc4574d496d2aacfd5329cad5a0578954468219c2689f44.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.112344 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48dqc" event={"ID":"b392345c-7432-4562-a35a-5205eea9e26a","Type":"ContainerDied","Data":"80f5d1069517f8bf5dc4574d496d2aacfd5329cad5a0578954468219c2689f44"} Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.112332 4805 generic.go:334] "Generic (PLEG): container finished" podID="b392345c-7432-4562-a35a-5205eea9e26a" containerID="80f5d1069517f8bf5dc4574d496d2aacfd5329cad5a0578954468219c2689f44" exitCode=0 Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.112400 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48dqc" event={"ID":"b392345c-7432-4562-a35a-5205eea9e26a","Type":"ContainerStarted","Data":"04cf00903834b46d15271ffc6c5fb2de79e1a6baf90ca43893948527d7296fe9"} Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.115245 4805 generic.go:334] "Generic (PLEG): container finished" podID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" containerID="4ed74231f6a2e0f9e3ce7b1b2475b0f442caed7f109d7e99a2adcd54d19c3f6f" exitCode=0 Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.116202 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmfv7" event={"ID":"18fc0a7f-912c-4900-9bfe-9c2b5049eba4","Type":"ContainerDied","Data":"4ed74231f6a2e0f9e3ce7b1b2475b0f442caed7f109d7e99a2adcd54d19c3f6f"} Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.116289 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmfv7" event={"ID":"18fc0a7f-912c-4900-9bfe-9c2b5049eba4","Type":"ContainerStarted","Data":"de5b55369c4d38784a3033784b9a7355ab183915b87c16d360710ba4b85ee501"} Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.893566 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r68dk"] Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.895150 4805 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.897952 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r68dk"] Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.903134 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.995512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4fd91e-cf72-4fe2-9a42-078567fe7782-catalog-content\") pod \"redhat-marketplace-r68dk\" (UID: \"5b4fd91e-cf72-4fe2-9a42-078567fe7782\") " pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.996056 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4fd91e-cf72-4fe2-9a42-078567fe7782-utilities\") pod \"redhat-marketplace-r68dk\" (UID: \"5b4fd91e-cf72-4fe2-9a42-078567fe7782\") " pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:16 crc kubenswrapper[4805]: I0216 21:00:16.996102 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mznd8\" (UniqueName: \"kubernetes.io/projected/5b4fd91e-cf72-4fe2-9a42-078567fe7782-kube-api-access-mznd8\") pod \"redhat-marketplace-r68dk\" (UID: \"5b4fd91e-cf72-4fe2-9a42-078567fe7782\") " pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.086922 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f2pbp"] Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.089467 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.096891 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4fd91e-cf72-4fe2-9a42-078567fe7782-catalog-content\") pod \"redhat-marketplace-r68dk\" (UID: \"5b4fd91e-cf72-4fe2-9a42-078567fe7782\") " pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.096959 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4fd91e-cf72-4fe2-9a42-078567fe7782-utilities\") pod \"redhat-marketplace-r68dk\" (UID: \"5b4fd91e-cf72-4fe2-9a42-078567fe7782\") " pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.096993 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mznd8\" (UniqueName: \"kubernetes.io/projected/5b4fd91e-cf72-4fe2-9a42-078567fe7782-kube-api-access-mznd8\") pod \"redhat-marketplace-r68dk\" (UID: \"5b4fd91e-cf72-4fe2-9a42-078567fe7782\") " pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.097032 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.097642 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4fd91e-cf72-4fe2-9a42-078567fe7782-catalog-content\") pod \"redhat-marketplace-r68dk\" (UID: \"5b4fd91e-cf72-4fe2-9a42-078567fe7782\") " pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.097882 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5b4fd91e-cf72-4fe2-9a42-078567fe7782-utilities\") pod \"redhat-marketplace-r68dk\" (UID: \"5b4fd91e-cf72-4fe2-9a42-078567fe7782\") " pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.130462 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mznd8\" (UniqueName: \"kubernetes.io/projected/5b4fd91e-cf72-4fe2-9a42-078567fe7782-kube-api-access-mznd8\") pod \"redhat-marketplace-r68dk\" (UID: \"5b4fd91e-cf72-4fe2-9a42-078567fe7782\") " pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.145134 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f2pbp"] Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.147331 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48dqc" event={"ID":"b392345c-7432-4562-a35a-5205eea9e26a","Type":"ContainerStarted","Data":"3e9627aad3bc76fb8a0378f357d1cfd6f1819b22fff033ca4e46eff2659a6eca"} Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.156034 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmfv7" event={"ID":"18fc0a7f-912c-4900-9bfe-9c2b5049eba4","Type":"ContainerStarted","Data":"510a2ef111c2b5b6c446153142f083111bc6a8ac8a81ac36c287cf4f3f59a3b5"} Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.198265 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxsqh\" (UniqueName: \"kubernetes.io/projected/58acc124-20af-4ab2-90ea-26cbdfe3b6eb-kube-api-access-mxsqh\") pod \"redhat-operators-f2pbp\" (UID: \"58acc124-20af-4ab2-90ea-26cbdfe3b6eb\") " pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.198614 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58acc124-20af-4ab2-90ea-26cbdfe3b6eb-catalog-content\") pod \"redhat-operators-f2pbp\" (UID: \"58acc124-20af-4ab2-90ea-26cbdfe3b6eb\") " pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.198715 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58acc124-20af-4ab2-90ea-26cbdfe3b6eb-utilities\") pod \"redhat-operators-f2pbp\" (UID: \"58acc124-20af-4ab2-90ea-26cbdfe3b6eb\") " pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.227558 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.299374 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxsqh\" (UniqueName: \"kubernetes.io/projected/58acc124-20af-4ab2-90ea-26cbdfe3b6eb-kube-api-access-mxsqh\") pod \"redhat-operators-f2pbp\" (UID: \"58acc124-20af-4ab2-90ea-26cbdfe3b6eb\") " pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.299421 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58acc124-20af-4ab2-90ea-26cbdfe3b6eb-catalog-content\") pod \"redhat-operators-f2pbp\" (UID: \"58acc124-20af-4ab2-90ea-26cbdfe3b6eb\") " pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.299462 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58acc124-20af-4ab2-90ea-26cbdfe3b6eb-utilities\") pod \"redhat-operators-f2pbp\" (UID: \"58acc124-20af-4ab2-90ea-26cbdfe3b6eb\") " 
pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.300052 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58acc124-20af-4ab2-90ea-26cbdfe3b6eb-utilities\") pod \"redhat-operators-f2pbp\" (UID: \"58acc124-20af-4ab2-90ea-26cbdfe3b6eb\") " pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.300986 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58acc124-20af-4ab2-90ea-26cbdfe3b6eb-catalog-content\") pod \"redhat-operators-f2pbp\" (UID: \"58acc124-20af-4ab2-90ea-26cbdfe3b6eb\") " pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.320814 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxsqh\" (UniqueName: \"kubernetes.io/projected/58acc124-20af-4ab2-90ea-26cbdfe3b6eb-kube-api-access-mxsqh\") pod \"redhat-operators-f2pbp\" (UID: \"58acc124-20af-4ab2-90ea-26cbdfe3b6eb\") " pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.419322 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.629959 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r68dk"] Feb 16 21:00:17 crc kubenswrapper[4805]: W0216 21:00:17.637323 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b4fd91e_cf72_4fe2_9a42_078567fe7782.slice/crio-8813b84c477256116b6922c982716314094f71c720f5d2d522c947988004857b WatchSource:0}: Error finding container 8813b84c477256116b6922c982716314094f71c720f5d2d522c947988004857b: Status 404 returned error can't find the container with id 8813b84c477256116b6922c982716314094f71c720f5d2d522c947988004857b Feb 16 21:00:17 crc kubenswrapper[4805]: I0216 21:00:17.839433 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f2pbp"] Feb 16 21:00:18 crc kubenswrapper[4805]: I0216 21:00:18.180980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmfv7" event={"ID":"18fc0a7f-912c-4900-9bfe-9c2b5049eba4","Type":"ContainerDied","Data":"510a2ef111c2b5b6c446153142f083111bc6a8ac8a81ac36c287cf4f3f59a3b5"} Feb 16 21:00:18 crc kubenswrapper[4805]: I0216 21:00:18.180908 4805 generic.go:334] "Generic (PLEG): container finished" podID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" containerID="510a2ef111c2b5b6c446153142f083111bc6a8ac8a81ac36c287cf4f3f59a3b5" exitCode=0 Feb 16 21:00:18 crc kubenswrapper[4805]: I0216 21:00:18.186561 4805 generic.go:334] "Generic (PLEG): container finished" podID="b392345c-7432-4562-a35a-5205eea9e26a" containerID="3e9627aad3bc76fb8a0378f357d1cfd6f1819b22fff033ca4e46eff2659a6eca" exitCode=0 Feb 16 21:00:18 crc kubenswrapper[4805]: I0216 21:00:18.186621 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48dqc" 
event={"ID":"b392345c-7432-4562-a35a-5205eea9e26a","Type":"ContainerDied","Data":"3e9627aad3bc76fb8a0378f357d1cfd6f1819b22fff033ca4e46eff2659a6eca"} Feb 16 21:00:18 crc kubenswrapper[4805]: I0216 21:00:18.190853 4805 generic.go:334] "Generic (PLEG): container finished" podID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" containerID="d22c12177453b033abf43633f7c29a3f3e35213b120541208e3bae57a1a88eb2" exitCode=0 Feb 16 21:00:18 crc kubenswrapper[4805]: I0216 21:00:18.191007 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2pbp" event={"ID":"58acc124-20af-4ab2-90ea-26cbdfe3b6eb","Type":"ContainerDied","Data":"d22c12177453b033abf43633f7c29a3f3e35213b120541208e3bae57a1a88eb2"} Feb 16 21:00:18 crc kubenswrapper[4805]: I0216 21:00:18.191042 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2pbp" event={"ID":"58acc124-20af-4ab2-90ea-26cbdfe3b6eb","Type":"ContainerStarted","Data":"5d0a3cae04716da4a6117ec7d7bbdf5b271d0acbd5601458bbe27b7660983ef9"} Feb 16 21:00:18 crc kubenswrapper[4805]: I0216 21:00:18.193813 4805 generic.go:334] "Generic (PLEG): container finished" podID="5b4fd91e-cf72-4fe2-9a42-078567fe7782" containerID="1e4d289444713cc4b57b4a9d536e1c842654c953d863d2e3d7c0f8c6511cffd8" exitCode=0 Feb 16 21:00:18 crc kubenswrapper[4805]: I0216 21:00:18.193877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r68dk" event={"ID":"5b4fd91e-cf72-4fe2-9a42-078567fe7782","Type":"ContainerDied","Data":"1e4d289444713cc4b57b4a9d536e1c842654c953d863d2e3d7c0f8c6511cffd8"} Feb 16 21:00:18 crc kubenswrapper[4805]: I0216 21:00:18.193915 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r68dk" event={"ID":"5b4fd91e-cf72-4fe2-9a42-078567fe7782","Type":"ContainerStarted","Data":"8813b84c477256116b6922c982716314094f71c720f5d2d522c947988004857b"} Feb 16 21:00:19 crc kubenswrapper[4805]: I0216 
21:00:19.201520 4805 generic.go:334] "Generic (PLEG): container finished" podID="5b4fd91e-cf72-4fe2-9a42-078567fe7782" containerID="57b6cac707193f28aa0eead79ba5eee9122f529e74d46bd6c99ea4a82408e9f1" exitCode=0 Feb 16 21:00:19 crc kubenswrapper[4805]: I0216 21:00:19.201600 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r68dk" event={"ID":"5b4fd91e-cf72-4fe2-9a42-078567fe7782","Type":"ContainerDied","Data":"57b6cac707193f28aa0eead79ba5eee9122f529e74d46bd6c99ea4a82408e9f1"} Feb 16 21:00:19 crc kubenswrapper[4805]: I0216 21:00:19.204556 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmfv7" event={"ID":"18fc0a7f-912c-4900-9bfe-9c2b5049eba4","Type":"ContainerStarted","Data":"734fb17a148b8676e9f140553ba5f1af605fd35f9d3463f68cc459f4a535d0e5"} Feb 16 21:00:19 crc kubenswrapper[4805]: I0216 21:00:19.209054 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48dqc" event={"ID":"b392345c-7432-4562-a35a-5205eea9e26a","Type":"ContainerStarted","Data":"1016bddd4c9d10b289ff2bc224c6195e4c72e32f50f26f74044a3b5b55fdfeba"} Feb 16 21:00:19 crc kubenswrapper[4805]: I0216 21:00:19.239525 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-48dqc" podStartSLOduration=2.772390287 podStartE2EDuration="5.239487028s" podCreationTimestamp="2026-02-16 21:00:14 +0000 UTC" firstStartedPulling="2026-02-16 21:00:16.115404147 +0000 UTC m=+233.934087442" lastFinishedPulling="2026-02-16 21:00:18.582500888 +0000 UTC m=+236.401184183" observedRunningTime="2026-02-16 21:00:19.238151502 +0000 UTC m=+237.056834797" watchObservedRunningTime="2026-02-16 21:00:19.239487028 +0000 UTC m=+237.058170323" Feb 16 21:00:19 crc kubenswrapper[4805]: I0216 21:00:19.263275 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pmfv7" 
podStartSLOduration=2.569308891 podStartE2EDuration="5.263253704s" podCreationTimestamp="2026-02-16 21:00:14 +0000 UTC" firstStartedPulling="2026-02-16 21:00:16.118838541 +0000 UTC m=+233.937521856" lastFinishedPulling="2026-02-16 21:00:18.812783374 +0000 UTC m=+236.631466669" observedRunningTime="2026-02-16 21:00:19.26051778 +0000 UTC m=+237.079201075" watchObservedRunningTime="2026-02-16 21:00:19.263253704 +0000 UTC m=+237.081936999" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.216503 4805 generic.go:334] "Generic (PLEG): container finished" podID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" containerID="2b4f62a77445e6a6c6de1ce5f9fbc52d0a81109517a00e8f077e3714c28d45d0" exitCode=0 Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.216546 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2pbp" event={"ID":"58acc124-20af-4ab2-90ea-26cbdfe3b6eb","Type":"ContainerDied","Data":"2b4f62a77445e6a6c6de1ce5f9fbc52d0a81109517a00e8f077e3714c28d45d0"} Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.220922 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r68dk" event={"ID":"5b4fd91e-cf72-4fe2-9a42-078567fe7782","Type":"ContainerStarted","Data":"a50aaeb40c16c6fa9f2b98bdcb405e067ee6099ca72519404ca6ea93544a309e"} Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.257530 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r68dk" podStartSLOduration=2.8328579400000002 podStartE2EDuration="4.257511538s" podCreationTimestamp="2026-02-16 21:00:16 +0000 UTC" firstStartedPulling="2026-02-16 21:00:18.196816729 +0000 UTC m=+236.015500024" lastFinishedPulling="2026-02-16 21:00:19.621470327 +0000 UTC m=+237.440153622" observedRunningTime="2026-02-16 21:00:20.256747376 +0000 UTC m=+238.075430681" watchObservedRunningTime="2026-02-16 21:00:20.257511538 +0000 UTC m=+238.076194833" Feb 16 21:00:20 crc 
kubenswrapper[4805]: I0216 21:00:20.721160 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.722162 4805 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.722356 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.722639 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172" gracePeriod=15 Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.722701 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374" gracePeriod=15 Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.722736 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e" gracePeriod=15 Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.722797 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
containerID="cri-o://8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9" gracePeriod=15 Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.722980 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00" gracePeriod=15 Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.723822 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:00:20 crc kubenswrapper[4805]: E0216 21:00:20.724066 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724083 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 21:00:20 crc kubenswrapper[4805]: E0216 21:00:20.724102 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724111 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4805]: E0216 21:00:20.724125 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724138 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4805]: E0216 21:00:20.724150 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724158 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 21:00:20 crc kubenswrapper[4805]: E0216 21:00:20.724171 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724179 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 21:00:20 crc kubenswrapper[4805]: E0216 21:00:20.724200 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724208 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 21:00:20 crc kubenswrapper[4805]: E0216 21:00:20.724218 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724228 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724369 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724397 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724409 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724421 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724432 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724442 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724453 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 21:00:20 crc kubenswrapper[4805]: E0216 21:00:20.724595 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.724606 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:00:20 crc kubenswrapper[4805]: E0216 21:00:20.798751 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: E0216 21:00:20.826274 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": 
dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-f2pbp.1894d5cae2a0d37b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-f2pbp,UID:58acc124-20af-4ab2-90ea-26cbdfe3b6eb,APIVersion:v1,ResourceVersion:30150,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 21:00:20.825748347 +0000 UTC m=+238.644431642,LastTimestamp:2026-02-16 21:00:20.825748347 +0000 UTC m=+238.644431642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.851923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.851979 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.852011 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") 
" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.852045 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.852075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.852097 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.852119 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.852346 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953028 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953097 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953130 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953148 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953169 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953197 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953255 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953275 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953337 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953372 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc 
kubenswrapper[4805]: I0216 21:00:20.953391 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953410 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953428 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953450 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953471 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:20 crc kubenswrapper[4805]: I0216 21:00:20.953492 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.099568 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:21 crc kubenswrapper[4805]: W0216 21:00:21.127026 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-2aca69ce8e0f5a8c79a324e496a4173f618dadcc8164f531e14f6812fbb7aa7b WatchSource:0}: Error finding container 2aca69ce8e0f5a8c79a324e496a4173f618dadcc8164f531e14f6812fbb7aa7b: Status 404 returned error can't find the container with id 2aca69ce8e0f5a8c79a324e496a4173f618dadcc8164f531e14f6812fbb7aa7b Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.240063 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2pbp" event={"ID":"58acc124-20af-4ab2-90ea-26cbdfe3b6eb","Type":"ContainerStarted","Data":"6635da82357509cfa6924ce1df112ad471358b42a53fc5a2a6f9066765320fc7"} Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.241278 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.241503 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.244953 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.247808 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.248552 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00" exitCode=0 Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.248578 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9" exitCode=0 Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.248585 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374" exitCode=0 Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.248593 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e" exitCode=2 Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.248656 4805 scope.go:117] "RemoveContainer" containerID="3aeb3a5d47badf103ab0c199a11083cc21633f80d0abf00319f25f78138aee8a" Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.250915 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2aca69ce8e0f5a8c79a324e496a4173f618dadcc8164f531e14f6812fbb7aa7b"} Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.253325 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c83051b-f772-4ad8-8e02-8d51a3386b25" containerID="e9be8888058702baef2fe6313925163e3b6517720ea66990bfb1fa98a74b5ca0" exitCode=0 Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.253413 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5c83051b-f772-4ad8-8e02-8d51a3386b25","Type":"ContainerDied","Data":"e9be8888058702baef2fe6313925163e3b6517720ea66990bfb1fa98a74b5ca0"} Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.254207 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.254694 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:21 crc kubenswrapper[4805]: I0216 21:00:21.255048 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 
21:00:22.259260 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d"} Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.259880 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4805]: E0216 21:00:22.259931 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.260160 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.262026 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.638940 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.639649 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.640056 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.774630 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c83051b-f772-4ad8-8e02-8d51a3386b25-kube-api-access\") pod \"5c83051b-f772-4ad8-8e02-8d51a3386b25\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.774981 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-var-lock\") pod \"5c83051b-f772-4ad8-8e02-8d51a3386b25\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.775035 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-kubelet-dir\") pod \"5c83051b-f772-4ad8-8e02-8d51a3386b25\" (UID: \"5c83051b-f772-4ad8-8e02-8d51a3386b25\") " Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.775363 4805 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5c83051b-f772-4ad8-8e02-8d51a3386b25" (UID: "5c83051b-f772-4ad8-8e02-8d51a3386b25"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.775421 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-var-lock" (OuterVolumeSpecName: "var-lock") pod "5c83051b-f772-4ad8-8e02-8d51a3386b25" (UID: "5c83051b-f772-4ad8-8e02-8d51a3386b25"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.782939 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c83051b-f772-4ad8-8e02-8d51a3386b25-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5c83051b-f772-4ad8-8e02-8d51a3386b25" (UID: "5c83051b-f772-4ad8-8e02-8d51a3386b25"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.876792 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c83051b-f772-4ad8-8e02-8d51a3386b25-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.876823 4805 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:22 crc kubenswrapper[4805]: I0216 21:00:22.876831 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c83051b-f772-4ad8-8e02-8d51a3386b25-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.111221 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.111977 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.112477 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.112872 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.113142 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.280444 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.280537 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.280548 4805 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.280580 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.280586 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.280633 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.281066 4805 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.281082 4805 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.281090 4805 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.281338 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.282296 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172" exitCode=0 Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.282410 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.282624 4805 scope.go:117] "RemoveContainer" containerID="d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.284875 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.284914 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5c83051b-f772-4ad8-8e02-8d51a3386b25","Type":"ContainerDied","Data":"768dc565a32dd50c968c4f5a86e1a709e67eca7ebba9b0ed66abbedd5518da0b"} Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.284970 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="768dc565a32dd50c968c4f5a86e1a709e67eca7ebba9b0ed66abbedd5518da0b" Feb 16 21:00:23 crc kubenswrapper[4805]: E0216 21:00:23.286067 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.295006 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.295465 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.295889 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.300890 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.301092 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.301310 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.309234 4805 scope.go:117] "RemoveContainer" containerID="8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.320992 4805 scope.go:117] "RemoveContainer" containerID="35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.340458 4805 scope.go:117] "RemoveContainer" containerID="4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.366702 4805 scope.go:117] "RemoveContainer" 
containerID="7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.388700 4805 scope.go:117] "RemoveContainer" containerID="4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.407524 4805 scope.go:117] "RemoveContainer" containerID="d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00" Feb 16 21:00:23 crc kubenswrapper[4805]: E0216 21:00:23.407996 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\": container with ID starting with d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00 not found: ID does not exist" containerID="d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.408026 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00"} err="failed to get container status \"d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\": rpc error: code = NotFound desc = could not find container \"d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00\": container with ID starting with d667c47ba0950a661b337600e56b6f95a36a8744ede5738cf13463cbf27f9b00 not found: ID does not exist" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.408049 4805 scope.go:117] "RemoveContainer" containerID="8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9" Feb 16 21:00:23 crc kubenswrapper[4805]: E0216 21:00:23.408386 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\": container with ID starting with 
8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9 not found: ID does not exist" containerID="8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.408416 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9"} err="failed to get container status \"8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\": rpc error: code = NotFound desc = could not find container \"8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9\": container with ID starting with 8ab083dfab0c48c3036aee5bfc6228f55236c26efecba6fd8be7fb4effa016f9 not found: ID does not exist" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.408430 4805 scope.go:117] "RemoveContainer" containerID="35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374" Feb 16 21:00:23 crc kubenswrapper[4805]: E0216 21:00:23.408706 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\": container with ID starting with 35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374 not found: ID does not exist" containerID="35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.408744 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374"} err="failed to get container status \"35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\": rpc error: code = NotFound desc = could not find container \"35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374\": container with ID starting with 35191bdcffd022faab05b955b21b081114813eba774fc3d44762aa6287861374 not found: ID does not 
exist" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.408756 4805 scope.go:117] "RemoveContainer" containerID="4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e" Feb 16 21:00:23 crc kubenswrapper[4805]: E0216 21:00:23.408993 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\": container with ID starting with 4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e not found: ID does not exist" containerID="4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.409015 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e"} err="failed to get container status \"4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\": rpc error: code = NotFound desc = could not find container \"4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e\": container with ID starting with 4510c0b4be989a4e723730113343fb3f19dd1e0344ef2da2d3e1f9daf38dab4e not found: ID does not exist" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.409027 4805 scope.go:117] "RemoveContainer" containerID="7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172" Feb 16 21:00:23 crc kubenswrapper[4805]: E0216 21:00:23.409558 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\": container with ID starting with 7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172 not found: ID does not exist" containerID="7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.409581 4805 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172"} err="failed to get container status \"7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\": rpc error: code = NotFound desc = could not find container \"7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172\": container with ID starting with 7436d0eef74dd33afb0fcde826389ace4835f0a28df30f2bbce099e6d7d92172 not found: ID does not exist" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.409594 4805 scope.go:117] "RemoveContainer" containerID="4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef" Feb 16 21:00:23 crc kubenswrapper[4805]: E0216 21:00:23.409911 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\": container with ID starting with 4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef not found: ID does not exist" containerID="4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.409932 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef"} err="failed to get container status \"4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\": rpc error: code = NotFound desc = could not find container \"4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef\": container with ID starting with 4fb6faa6c04c09bd566fbe49060d26e75a5e128abd233b001ec5afb1aeac61ef not found: ID does not exist" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.608176 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.608689 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.609455 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:23 crc kubenswrapper[4805]: I0216 21:00:23.610383 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 21:00:23 crc kubenswrapper[4805]: E0216 21:00:23.906448 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-f2pbp.1894d5cae2a0d37b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-f2pbp,UID:58acc124-20af-4ab2-90ea-26cbdfe3b6eb,APIVersion:v1,ResourceVersion:30150,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 21:00:20.825748347 +0000 UTC 
m=+238.644431642,LastTimestamp:2026-02-16 21:00:20.825748347 +0000 UTC m=+238.644431642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 21:00:24 crc kubenswrapper[4805]: I0216 21:00:24.850593 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:24 crc kubenswrapper[4805]: I0216 21:00:24.850659 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:24 crc kubenswrapper[4805]: I0216 21:00:24.891009 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:24 crc kubenswrapper[4805]: I0216 21:00:24.891570 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4805]: I0216 21:00:24.891825 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:24 crc kubenswrapper[4805]: I0216 21:00:24.892099 4805 status_manager.go:851] "Failed to get status for pod" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" pod="openshift-marketplace/certified-operators-pmfv7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pmfv7\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc 
kubenswrapper[4805]: E0216 21:00:25.023573 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.024055 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.024311 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.024553 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.024800 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.024832 4805 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.025030 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="200ms" Feb 16 21:00:25 crc 
kubenswrapper[4805]: I0216 21:00:25.037938 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.037979 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.088764 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.089463 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.090177 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.090660 4805 status_manager.go:851] "Failed to get status for pod" podUID="b392345c-7432-4562-a35a-5205eea9e26a" pod="openshift-marketplace/community-operators-48dqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-48dqc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.091039 4805 status_manager.go:851] "Failed to get status for pod" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" pod="openshift-marketplace/certified-operators-pmfv7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pmfv7\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.226083 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="400ms" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.306351 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:00:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:00:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:00:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:00:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.306868 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial 
tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.307332 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.307679 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.308033 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.308067 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.339650 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-48dqc" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.340285 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.340598 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.341070 4805 status_manager.go:851] "Failed to get status for pod" podUID="b392345c-7432-4562-a35a-5205eea9e26a" pod="openshift-marketplace/community-operators-48dqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-48dqc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.341450 4805 status_manager.go:851] "Failed to get status for pod" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" pod="openshift-marketplace/certified-operators-pmfv7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pmfv7\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.353739 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pmfv7" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.354234 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.354898 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.355671 4805 
status_manager.go:851] "Failed to get status for pod" podUID="b392345c-7432-4562-a35a-5205eea9e26a" pod="openshift-marketplace/community-operators-48dqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-48dqc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: I0216 21:00:25.356032 4805 status_manager.go:851] "Failed to get status for pod" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" pod="openshift-marketplace/certified-operators-pmfv7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pmfv7\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:25 crc kubenswrapper[4805]: E0216 21:00:25.627374 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="800ms" Feb 16 21:00:26 crc kubenswrapper[4805]: E0216 21:00:26.428971 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="1.6s" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.228633 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.229068 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.282215 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 
21:00:27.282713 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.283178 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.283574 4805 status_manager.go:851] "Failed to get status for pod" podUID="b392345c-7432-4562-a35a-5205eea9e26a" pod="openshift-marketplace/community-operators-48dqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-48dqc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.283796 4805 status_manager.go:851] "Failed to get status for pod" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" pod="openshift-marketplace/certified-operators-pmfv7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pmfv7\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.284009 4805 status_manager.go:851] "Failed to get status for pod" podUID="5b4fd91e-cf72-4fe2-9a42-078567fe7782" pod="openshift-marketplace/redhat-marketplace-r68dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r68dk\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.340694 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r68dk" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.341242 4805 status_manager.go:851] "Failed to get status for pod" podUID="5b4fd91e-cf72-4fe2-9a42-078567fe7782" pod="openshift-marketplace/redhat-marketplace-r68dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r68dk\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.341521 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.341857 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.342276 4805 status_manager.go:851] "Failed to get status for pod" podUID="b392345c-7432-4562-a35a-5205eea9e26a" pod="openshift-marketplace/community-operators-48dqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-48dqc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.342549 4805 status_manager.go:851] "Failed to get status for pod" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" pod="openshift-marketplace/certified-operators-pmfv7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pmfv7\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.420938 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.420984 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.456058 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.456556 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.456870 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.457257 4805 status_manager.go:851] "Failed to get status for pod" podUID="b392345c-7432-4562-a35a-5205eea9e26a" pod="openshift-marketplace/community-operators-48dqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-48dqc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.457789 4805 status_manager.go:851] "Failed to 
get status for pod" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" pod="openshift-marketplace/certified-operators-pmfv7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pmfv7\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:27 crc kubenswrapper[4805]: I0216 21:00:27.458169 4805 status_manager.go:851] "Failed to get status for pod" podUID="5b4fd91e-cf72-4fe2-9a42-078567fe7782" pod="openshift-marketplace/redhat-marketplace-r68dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r68dk\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4805]: E0216 21:00:28.030580 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="3.2s" Feb 16 21:00:28 crc kubenswrapper[4805]: I0216 21:00:28.378318 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f2pbp" Feb 16 21:00:28 crc kubenswrapper[4805]: I0216 21:00:28.379252 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4805]: I0216 21:00:28.381660 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:28 
crc kubenswrapper[4805]: I0216 21:00:28.382162 4805 status_manager.go:851] "Failed to get status for pod" podUID="b392345c-7432-4562-a35a-5205eea9e26a" pod="openshift-marketplace/community-operators-48dqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-48dqc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4805]: I0216 21:00:28.382587 4805 status_manager.go:851] "Failed to get status for pod" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" pod="openshift-marketplace/certified-operators-pmfv7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pmfv7\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:28 crc kubenswrapper[4805]: I0216 21:00:28.383122 4805 status_manager.go:851] "Failed to get status for pod" podUID="5b4fd91e-cf72-4fe2-9a42-078567fe7782" pod="openshift-marketplace/redhat-marketplace-r68dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r68dk\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:31 crc kubenswrapper[4805]: E0216 21:00:31.231452 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="6.4s" Feb 16 21:00:31 crc kubenswrapper[4805]: E0216 21:00:31.605737 4805 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.64:6443: connect: connection refused" 
pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" volumeName="registry-storage" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.597016 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.601646 4805 status_manager.go:851] "Failed to get status for pod" podUID="b392345c-7432-4562-a35a-5205eea9e26a" pod="openshift-marketplace/community-operators-48dqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-48dqc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.602592 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.603194 4805 status_manager.go:851] "Failed to get status for pod" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" pod="openshift-marketplace/certified-operators-pmfv7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pmfv7\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.603655 4805 status_manager.go:851] "Failed to get status for pod" podUID="5b4fd91e-cf72-4fe2-9a42-078567fe7782" pod="openshift-marketplace/redhat-marketplace-r68dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r68dk\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.603921 4805 status_manager.go:851] "Failed to get status for pod" 
podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.604205 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.604521 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.604807 4805 status_manager.go:851] "Failed to get status for pod" podUID="b392345c-7432-4562-a35a-5205eea9e26a" pod="openshift-marketplace/community-operators-48dqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-48dqc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.605304 4805 status_manager.go:851] "Failed to get status for pod" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" pod="openshift-marketplace/certified-operators-pmfv7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pmfv7\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.606061 4805 status_manager.go:851] "Failed to get status for pod" 
podUID="5b4fd91e-cf72-4fe2-9a42-078567fe7782" pod="openshift-marketplace/redhat-marketplace-r68dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r68dk\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.616267 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e75ed224-e9fe-421a-9fda-36c7b5dc70f8" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.616303 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e75ed224-e9fe-421a-9fda-36c7b5dc70f8" Feb 16 21:00:33 crc kubenswrapper[4805]: E0216 21:00:33.616713 4805 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:33 crc kubenswrapper[4805]: I0216 21:00:33.617431 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:33 crc kubenswrapper[4805]: E0216 21:00:33.908044 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-f2pbp.1894d5cae2a0d37b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-f2pbp,UID:58acc124-20af-4ab2-90ea-26cbdfe3b6eb,APIVersion:v1,ResourceVersion:30150,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 21:00:20.825748347 +0000 UTC m=+238.644431642,LastTimestamp:2026-02-16 21:00:20.825748347 +0000 UTC m=+238.644431642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 21:00:34 crc kubenswrapper[4805]: I0216 21:00:34.346840 4805 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a44e63b11d853000dc85ecd0f72f0695b6ea0c723edb25a6732dda6127706727" exitCode=0 Feb 16 21:00:34 crc kubenswrapper[4805]: I0216 21:00:34.346891 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a44e63b11d853000dc85ecd0f72f0695b6ea0c723edb25a6732dda6127706727"} Feb 16 21:00:34 crc kubenswrapper[4805]: I0216 21:00:34.346921 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"09ec30a69441d7a7027eefce40bb2786b049d2272aa502f0ecf252cc9d264437"} 
Feb 16 21:00:34 crc kubenswrapper[4805]: I0216 21:00:34.347235 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e75ed224-e9fe-421a-9fda-36c7b5dc70f8" Feb 16 21:00:34 crc kubenswrapper[4805]: I0216 21:00:34.347250 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e75ed224-e9fe-421a-9fda-36c7b5dc70f8" Feb 16 21:00:34 crc kubenswrapper[4805]: E0216 21:00:34.347610 4805 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:34 crc kubenswrapper[4805]: I0216 21:00:34.347611 4805 status_manager.go:851] "Failed to get status for pod" podUID="58acc124-20af-4ab2-90ea-26cbdfe3b6eb" pod="openshift-marketplace/redhat-operators-f2pbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2pbp\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:34 crc kubenswrapper[4805]: I0216 21:00:34.348181 4805 status_manager.go:851] "Failed to get status for pod" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:34 crc kubenswrapper[4805]: I0216 21:00:34.348495 4805 status_manager.go:851] "Failed to get status for pod" podUID="b392345c-7432-4562-a35a-5205eea9e26a" pod="openshift-marketplace/community-operators-48dqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-48dqc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:34 crc kubenswrapper[4805]: I0216 21:00:34.348846 4805 
status_manager.go:851] "Failed to get status for pod" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" pod="openshift-marketplace/certified-operators-pmfv7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pmfv7\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:34 crc kubenswrapper[4805]: I0216 21:00:34.349130 4805 status_manager.go:851] "Failed to get status for pod" podUID="5b4fd91e-cf72-4fe2-9a42-078567fe7782" pod="openshift-marketplace/redhat-marketplace-r68dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r68dk\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 16 21:00:35 crc kubenswrapper[4805]: I0216 21:00:35.355889 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 21:00:35 crc kubenswrapper[4805]: I0216 21:00:35.356234 4805 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05" exitCode=1 Feb 16 21:00:35 crc kubenswrapper[4805]: I0216 21:00:35.356307 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05"} Feb 16 21:00:35 crc kubenswrapper[4805]: I0216 21:00:35.356756 4805 scope.go:117] "RemoveContainer" containerID="36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05" Feb 16 21:00:35 crc kubenswrapper[4805]: I0216 21:00:35.360535 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"07e10d918c3a34e7ea0d748457b6a1f639a87da1cf615612699d769cd9287e8a"} Feb 16 21:00:35 crc kubenswrapper[4805]: I0216 21:00:35.360600 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ef15a134c06887624d67bcab4e1b0101e6c2d38782f5f6513555af21b8598fe4"} Feb 16 21:00:35 crc kubenswrapper[4805]: I0216 21:00:35.360614 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ea336f5160ae521dafc6c1e3ce840922e84438cd8cd800d6c298719be3672071"} Feb 16 21:00:35 crc kubenswrapper[4805]: I0216 21:00:35.360628 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6b17077d42ce4c7395ce101848f5163386ef38a061aaa5eba13ee6e256454b0a"} Feb 16 21:00:36 crc kubenswrapper[4805]: I0216 21:00:36.430494 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 21:00:36 crc kubenswrapper[4805]: I0216 21:00:36.430614 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e24e109c0c485805be876f51af9e97117fce2494b2df0c339a65e2cb2b0ecd4d"} Feb 16 21:00:36 crc kubenswrapper[4805]: I0216 21:00:36.457818 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8bd9e19f329f9ce05238566d3bf66ffbc4826f07c8b4499aa961c403c0773e56"} 
Feb 16 21:00:36 crc kubenswrapper[4805]: I0216 21:00:36.458799 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e75ed224-e9fe-421a-9fda-36c7b5dc70f8" Feb 16 21:00:36 crc kubenswrapper[4805]: I0216 21:00:36.458834 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e75ed224-e9fe-421a-9fda-36c7b5dc70f8" Feb 16 21:00:36 crc kubenswrapper[4805]: I0216 21:00:36.459127 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:38 crc kubenswrapper[4805]: I0216 21:00:38.481893 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:00:38 crc kubenswrapper[4805]: I0216 21:00:38.482111 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 21:00:38 crc kubenswrapper[4805]: I0216 21:00:38.482174 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 21:00:38 crc kubenswrapper[4805]: I0216 21:00:38.617909 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:38 crc kubenswrapper[4805]: I0216 21:00:38.617947 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:38 crc kubenswrapper[4805]: I0216 
21:00:38.623376 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:40 crc kubenswrapper[4805]: I0216 21:00:40.516194 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:00:41 crc kubenswrapper[4805]: I0216 21:00:41.468519 4805 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:42 crc kubenswrapper[4805]: I0216 21:00:42.488751 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e75ed224-e9fe-421a-9fda-36c7b5dc70f8" Feb 16 21:00:42 crc kubenswrapper[4805]: I0216 21:00:42.489616 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e75ed224-e9fe-421a-9fda-36c7b5dc70f8" Feb 16 21:00:42 crc kubenswrapper[4805]: I0216 21:00:42.493436 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:00:43 crc kubenswrapper[4805]: I0216 21:00:43.495095 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e75ed224-e9fe-421a-9fda-36c7b5dc70f8" Feb 16 21:00:43 crc kubenswrapper[4805]: I0216 21:00:43.495932 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e75ed224-e9fe-421a-9fda-36c7b5dc70f8" Feb 16 21:00:43 crc kubenswrapper[4805]: I0216 21:00:43.650277 4805 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4ce993f7-4893-42d4-a074-ce54f31d1ccc" Feb 16 21:00:48 crc kubenswrapper[4805]: I0216 21:00:48.482021 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 21:00:48 crc kubenswrapper[4805]: I0216 21:00:48.482886 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 21:00:51 crc kubenswrapper[4805]: I0216 21:00:51.433226 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 21:00:51 crc kubenswrapper[4805]: I0216 21:00:51.748099 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 21:00:51 crc kubenswrapper[4805]: I0216 21:00:51.814027 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 21:00:52 crc kubenswrapper[4805]: I0216 21:00:52.475778 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 21:00:52 crc kubenswrapper[4805]: I0216 21:00:52.768340 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 21:00:52 crc kubenswrapper[4805]: I0216 21:00:52.960026 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 21:00:53 crc kubenswrapper[4805]: I0216 21:00:53.503484 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 21:00:53 crc kubenswrapper[4805]: I0216 
21:00:53.597141 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 21:00:53 crc kubenswrapper[4805]: I0216 21:00:53.649075 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 21:00:53 crc kubenswrapper[4805]: I0216 21:00:53.755112 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 21:00:53 crc kubenswrapper[4805]: I0216 21:00:53.904673 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.059951 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.158294 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.221987 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.325322 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.425518 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.514112 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.573525 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.723514 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.723576 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.779964 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.800077 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.835462 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.892582 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 21:00:54 crc kubenswrapper[4805]: I0216 21:00:54.929502 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.039651 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.133506 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.171124 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 21:00:55 crc 
kubenswrapper[4805]: I0216 21:00:55.470672 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.530967 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.533328 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.624702 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.685175 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.775104 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.849741 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.867819 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.936077 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 21:00:55 crc kubenswrapper[4805]: I0216 21:00:55.969875 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.006075 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.019382 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.059223 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.074689 4805 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.092510 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.142568 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.176835 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.177103 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.213663 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.230172 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.305278 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 21:00:56 crc kubenswrapper[4805]: 
I0216 21:00:56.328019 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.341540 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.443847 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.484191 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.660251 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.676536 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.810674 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.876971 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.968746 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 21:00:56 crc kubenswrapper[4805]: I0216 21:00:56.969678 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.019402 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 
21:00:57.043918 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.139242 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.155847 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.185479 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.222852 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.247786 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.458373 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.569808 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.589034 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.753760 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.766550 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.823139 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.839329 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.865790 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.878196 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.932315 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 21:00:57 crc kubenswrapper[4805]: I0216 21:00:57.964335 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.025162 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.038310 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.163542 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.233924 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 21:00:58 crc 
kubenswrapper[4805]: I0216 21:00:58.277320 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.338762 4805 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.339076 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.345347 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.424668 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.446904 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.463167 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.482393 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.482467 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: 
connect: connection refused" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.482534 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.483413 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e24e109c0c485805be876f51af9e97117fce2494b2df0c339a65e2cb2b0ecd4d"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.483605 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://e24e109c0c485805be876f51af9e97117fce2494b2df0c339a65e2cb2b0ecd4d" gracePeriod=30 Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.520394 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.538786 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.561879 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.565635 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.619091 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 21:00:58 crc 
kubenswrapper[4805]: I0216 21:00:58.625949 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.652782 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.663593 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.746338 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.760584 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.769356 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.810805 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.814309 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.815104 4805 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.836634 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.837506 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.871822 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.917381 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.920172 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.945616 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 21:00:58 crc kubenswrapper[4805]: I0216 21:00:58.979690 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.051472 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.150617 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.199965 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.245287 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.312199 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.312357 4805 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.413639 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.463449 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.556823 4805 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.623166 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.652359 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.669190 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.674801 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.676205 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.903539 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.925044 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" 
Feb 16 21:00:59 crc kubenswrapper[4805]: I0216 21:00:59.948812 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.040022 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.076985 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.103273 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.107132 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.184637 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.186352 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.204915 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.257357 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.322544 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 21:01:00 crc kubenswrapper[4805]: 
I0216 21:01:00.337969 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.481455 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.524247 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.546828 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.562808 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.587964 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.669343 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.688633 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.755827 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.954763 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 21:01:00 crc kubenswrapper[4805]: I0216 21:01:00.976833 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.002340 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.014409 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.040738 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.055352 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.097919 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.273470 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.449164 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.450218 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.536695 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.694080 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.698088 4805 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.730378 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.838867 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.947479 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 21:01:01 crc kubenswrapper[4805]: I0216 21:01:01.975807 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.010279 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.033262 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.040183 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.135061 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.193634 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.265389 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.269326 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.333023 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.366736 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.444843 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.500442 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.543635 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.636107 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.678805 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.715587 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.728118 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 21:01:02 crc 
kubenswrapper[4805]: I0216 21:01:02.811993 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.925604 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.942969 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.986523 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 21:01:02 crc kubenswrapper[4805]: I0216 21:01:02.999191 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.042048 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.042074 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.065601 4805 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.071236 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f2pbp" podStartSLOduration=43.590332674 podStartE2EDuration="46.071202109s" podCreationTimestamp="2026-02-16 21:00:17 +0000 UTC" firstStartedPulling="2026-02-16 21:00:18.192477711 +0000 UTC m=+236.011161006" lastFinishedPulling="2026-02-16 21:00:20.673347136 +0000 UTC m=+238.492030441" observedRunningTime="2026-02-16 
21:00:41.056370928 +0000 UTC m=+258.875054243" watchObservedRunningTime="2026-02-16 21:01:03.071202109 +0000 UTC m=+280.889885444" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.074845 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.074915 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.085393 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.102331 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.102315464 podStartE2EDuration="22.102315464s" podCreationTimestamp="2026-02-16 21:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:01:03.100828322 +0000 UTC m=+280.919511657" watchObservedRunningTime="2026-02-16 21:01:03.102315464 +0000 UTC m=+280.920998759" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.287253 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.302426 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.314872 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.331014 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.403385 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.461244 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.580822 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.595497 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.602015 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.605261 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.637793 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.772375 4805 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.772660 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d" gracePeriod=5 Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 
21:01:03.804639 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 21:01:03 crc kubenswrapper[4805]: I0216 21:01:03.809253 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.087481 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.114390 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.144649 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.192648 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.230128 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.248811 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.265531 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.289456 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.339872 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.369091 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.424211 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.586557 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.589530 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.630431 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.682357 4805 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.700069 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.851734 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 21:01:04 crc kubenswrapper[4805]: I0216 21:01:04.974266 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.025467 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.152196 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.162581 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.164956 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.206634 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.255012 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.342037 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.464747 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.500954 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.797226 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.849648 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.866819 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 21:01:05 crc 
kubenswrapper[4805]: I0216 21:01:05.907641 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 21:01:05 crc kubenswrapper[4805]: I0216 21:01:05.908407 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 21:01:06 crc kubenswrapper[4805]: I0216 21:01:06.047622 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 21:01:06 crc kubenswrapper[4805]: I0216 21:01:06.078100 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 21:01:06 crc kubenswrapper[4805]: I0216 21:01:06.510250 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 21:01:06 crc kubenswrapper[4805]: I0216 21:01:06.547087 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 21:01:06 crc kubenswrapper[4805]: I0216 21:01:06.888844 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 21:01:06 crc kubenswrapper[4805]: I0216 21:01:06.985789 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 21:01:07 crc kubenswrapper[4805]: I0216 21:01:07.106226 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 21:01:07 crc kubenswrapper[4805]: I0216 21:01:07.157189 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 21:01:07 crc kubenswrapper[4805]: I0216 21:01:07.442242 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 21:01:07 crc kubenswrapper[4805]: I0216 21:01:07.446573 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 21:01:07 crc kubenswrapper[4805]: I0216 21:01:07.496080 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 21:01:07 crc kubenswrapper[4805]: I0216 21:01:07.502744 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 21:01:07 crc kubenswrapper[4805]: I0216 21:01:07.520168 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 21:01:07 crc kubenswrapper[4805]: I0216 21:01:07.531795 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 21:01:07 crc kubenswrapper[4805]: I0216 21:01:07.531995 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 21:01:07 crc kubenswrapper[4805]: I0216 21:01:07.826005 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 21:01:08 crc kubenswrapper[4805]: I0216 21:01:08.376848 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 21:01:08 crc kubenswrapper[4805]: I0216 21:01:08.378984 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 21:01:08 crc kubenswrapper[4805]: I0216 21:01:08.548889 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 21:01:08 crc 
kubenswrapper[4805]: I0216 21:01:08.617467 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 21:01:08 crc kubenswrapper[4805]: I0216 21:01:08.684577 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.362038 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.362547 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.550772 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.550898 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.550922 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.550948 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.550997 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.551098 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.551152 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.551169 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.551284 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.551655 4805 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.551682 4805 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.551697 4805 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.551711 4805 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.563083 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.616093 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.652939 4805 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.691407 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.691514 4805 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d" exitCode=137 Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.691591 4805 scope.go:117] "RemoveContainer" containerID="357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.691615 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.722700 4805 scope.go:117] "RemoveContainer" containerID="357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d" Feb 16 21:01:09 crc kubenswrapper[4805]: E0216 21:01:09.723186 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d\": container with ID starting with 357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d not found: ID does not exist" containerID="357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d" Feb 16 21:01:09 crc kubenswrapper[4805]: I0216 21:01:09.723256 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d"} err="failed to get container status \"357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d\": rpc error: code = NotFound desc = could not find container \"357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d\": container with ID starting with 357e96196e844c850796fc7df4b492172446ce04b6c6859a35c68a0003f4751d not found: ID does not exist" Feb 16 21:01:11 crc kubenswrapper[4805]: I0216 21:01:11.127043 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 21:01:23 crc kubenswrapper[4805]: I0216 21:01:23.363296 4805 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 16 21:01:28 crc kubenswrapper[4805]: I0216 21:01:28.837314 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 16 21:01:28 crc kubenswrapper[4805]: I0216 
21:01:28.840037 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 21:01:28 crc kubenswrapper[4805]: I0216 21:01:28.840100 4805 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e24e109c0c485805be876f51af9e97117fce2494b2df0c339a65e2cb2b0ecd4d" exitCode=137 Feb 16 21:01:28 crc kubenswrapper[4805]: I0216 21:01:28.840138 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e24e109c0c485805be876f51af9e97117fce2494b2df0c339a65e2cb2b0ecd4d"} Feb 16 21:01:28 crc kubenswrapper[4805]: I0216 21:01:28.840170 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fda677262d584aa5f7eeb96df1c483e43b348020762a99d54a39af4ab487faa8"} Feb 16 21:01:28 crc kubenswrapper[4805]: I0216 21:01:28.840189 4805 scope.go:117] "RemoveContainer" containerID="36cad0e934526edb2afa041c15311e6b8fffd00bc1e9829165308756cdb84e05" Feb 16 21:01:29 crc kubenswrapper[4805]: I0216 21:01:29.848849 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 16 21:01:30 crc kubenswrapper[4805]: I0216 21:01:30.516478 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:01:38 crc kubenswrapper[4805]: I0216 21:01:38.481233 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:01:38 crc 
kubenswrapper[4805]: I0216 21:01:38.485182 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:01:38 crc kubenswrapper[4805]: I0216 21:01:38.902876 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.299823 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg"] Feb 16 21:01:54 crc kubenswrapper[4805]: E0216 21:01:54.300555 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.300568 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 21:01:54 crc kubenswrapper[4805]: E0216 21:01:54.300584 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" containerName="installer" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.300592 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" containerName="installer" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.300801 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c83051b-f772-4ad8-8e02-8d51a3386b25" containerName="installer" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.300818 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.301227 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.304350 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.304378 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.304810 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.304851 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.305554 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.313313 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg"] Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.476617 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/78df62ef-eec8-4cb7-9898-dafbf29c47be-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-dsbzg\" (UID: \"78df62ef-eec8-4cb7-9898-dafbf29c47be\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.476666 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/78df62ef-eec8-4cb7-9898-dafbf29c47be-telemetry-config\") pod 
\"cluster-monitoring-operator-6d5b84845-dsbzg\" (UID: \"78df62ef-eec8-4cb7-9898-dafbf29c47be\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.476923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9vpk\" (UniqueName: \"kubernetes.io/projected/78df62ef-eec8-4cb7-9898-dafbf29c47be-kube-api-access-r9vpk\") pod \"cluster-monitoring-operator-6d5b84845-dsbzg\" (UID: \"78df62ef-eec8-4cb7-9898-dafbf29c47be\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.578513 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/78df62ef-eec8-4cb7-9898-dafbf29c47be-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-dsbzg\" (UID: \"78df62ef-eec8-4cb7-9898-dafbf29c47be\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.578573 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/78df62ef-eec8-4cb7-9898-dafbf29c47be-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-dsbzg\" (UID: \"78df62ef-eec8-4cb7-9898-dafbf29c47be\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.578610 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9vpk\" (UniqueName: \"kubernetes.io/projected/78df62ef-eec8-4cb7-9898-dafbf29c47be-kube-api-access-r9vpk\") pod \"cluster-monitoring-operator-6d5b84845-dsbzg\" (UID: \"78df62ef-eec8-4cb7-9898-dafbf29c47be\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" Feb 16 21:01:54 crc 
kubenswrapper[4805]: I0216 21:01:54.579499 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/78df62ef-eec8-4cb7-9898-dafbf29c47be-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-dsbzg\" (UID: \"78df62ef-eec8-4cb7-9898-dafbf29c47be\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.587781 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/78df62ef-eec8-4cb7-9898-dafbf29c47be-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-dsbzg\" (UID: \"78df62ef-eec8-4cb7-9898-dafbf29c47be\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.593667 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9vpk\" (UniqueName: \"kubernetes.io/projected/78df62ef-eec8-4cb7-9898-dafbf29c47be-kube-api-access-r9vpk\") pod \"cluster-monitoring-operator-6d5b84845-dsbzg\" (UID: \"78df62ef-eec8-4cb7-9898-dafbf29c47be\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" Feb 16 21:01:54 crc kubenswrapper[4805]: I0216 21:01:54.620036 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" Feb 16 21:01:55 crc kubenswrapper[4805]: I0216 21:01:55.019295 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg"] Feb 16 21:01:56 crc kubenswrapper[4805]: I0216 21:01:56.014615 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" event={"ID":"78df62ef-eec8-4cb7-9898-dafbf29c47be","Type":"ContainerStarted","Data":"5a86bae20c43d445f5de58fc93d857d861a75042d97e4a50f9fc96578b96948d"} Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.450497 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ps5z8"] Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.451513 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.476682 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ps5z8"] Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.557873 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq"] Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.558672 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.560123 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.564906 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq"] Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.623303 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92m5k\" (UniqueName: \"kubernetes.io/projected/a27076c7-e210-4974-a9fb-1cceebf6584e-kube-api-access-92m5k\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.623527 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.623603 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a27076c7-e210-4974-a9fb-1cceebf6584e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.623663 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a27076c7-e210-4974-a9fb-1cceebf6584e-registry-certificates\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.623751 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a27076c7-e210-4974-a9fb-1cceebf6584e-bound-sa-token\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.623848 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a27076c7-e210-4974-a9fb-1cceebf6584e-registry-tls\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.623924 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a27076c7-e210-4974-a9fb-1cceebf6584e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.623993 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a27076c7-e210-4974-a9fb-1cceebf6584e-trusted-ca\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.657227 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.725278 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92m5k\" (UniqueName: \"kubernetes.io/projected/a27076c7-e210-4974-a9fb-1cceebf6584e-kube-api-access-92m5k\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.725350 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a27076c7-e210-4974-a9fb-1cceebf6584e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.725392 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a27076c7-e210-4974-a9fb-1cceebf6584e-registry-certificates\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.725429 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a27076c7-e210-4974-a9fb-1cceebf6584e-bound-sa-token\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.725456 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a27076c7-e210-4974-a9fb-1cceebf6584e-registry-tls\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.725502 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a27076c7-e210-4974-a9fb-1cceebf6584e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.725526 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a27076c7-e210-4974-a9fb-1cceebf6584e-trusted-ca\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.725598 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/432d6d0d-5b84-4654-bb1b-a214837b0532-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-49jsq\" (UID: \"432d6d0d-5b84-4654-bb1b-a214837b0532\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.726236 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a27076c7-e210-4974-a9fb-1cceebf6584e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.726687 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a27076c7-e210-4974-a9fb-1cceebf6584e-registry-certificates\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.726953 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a27076c7-e210-4974-a9fb-1cceebf6584e-trusted-ca\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.740350 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a27076c7-e210-4974-a9fb-1cceebf6584e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.740538 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a27076c7-e210-4974-a9fb-1cceebf6584e-registry-tls\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc 
kubenswrapper[4805]: I0216 21:01:57.741707 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a27076c7-e210-4974-a9fb-1cceebf6584e-bound-sa-token\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.742304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92m5k\" (UniqueName: \"kubernetes.io/projected/a27076c7-e210-4974-a9fb-1cceebf6584e-kube-api-access-92m5k\") pod \"image-registry-66df7c8f76-ps5z8\" (UID: \"a27076c7-e210-4974-a9fb-1cceebf6584e\") " pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.769640 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.826700 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/432d6d0d-5b84-4654-bb1b-a214837b0532-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-49jsq\" (UID: \"432d6d0d-5b84-4654-bb1b-a214837b0532\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.829643 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/432d6d0d-5b84-4654-bb1b-a214837b0532-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-49jsq\" (UID: \"432d6d0d-5b84-4654-bb1b-a214837b0532\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq" Feb 16 21:01:57 crc kubenswrapper[4805]: I0216 21:01:57.872947 4805 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq" Feb 16 21:01:58 crc kubenswrapper[4805]: I0216 21:01:58.051369 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" event={"ID":"78df62ef-eec8-4cb7-9898-dafbf29c47be","Type":"ContainerStarted","Data":"4065027283c468b8a5d6e6dad97fe7cc36920b2e32a4dbd304441e20b14cc06b"} Feb 16 21:01:58 crc kubenswrapper[4805]: I0216 21:01:58.074585 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dsbzg" podStartSLOduration=2.150503094 podStartE2EDuration="4.074567605s" podCreationTimestamp="2026-02-16 21:01:54 +0000 UTC" firstStartedPulling="2026-02-16 21:01:55.02690316 +0000 UTC m=+332.845586465" lastFinishedPulling="2026-02-16 21:01:56.950967681 +0000 UTC m=+334.769650976" observedRunningTime="2026-02-16 21:01:58.073287678 +0000 UTC m=+335.891970973" watchObservedRunningTime="2026-02-16 21:01:58.074567605 +0000 UTC m=+335.893250900" Feb 16 21:01:58 crc kubenswrapper[4805]: I0216 21:01:58.084894 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ps5z8"] Feb 16 21:01:58 crc kubenswrapper[4805]: I0216 21:01:58.423436 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq"] Feb 16 21:01:58 crc kubenswrapper[4805]: W0216 21:01:58.424356 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod432d6d0d_5b84_4654_bb1b_a214837b0532.slice/crio-fd95bab623138dcd80349f0c8510666db63e41f4ec33851416d60f9db268143a WatchSource:0}: Error finding container fd95bab623138dcd80349f0c8510666db63e41f4ec33851416d60f9db268143a: Status 404 returned error can't find the container with id 
fd95bab623138dcd80349f0c8510666db63e41f4ec33851416d60f9db268143a Feb 16 21:01:59 crc kubenswrapper[4805]: I0216 21:01:59.061030 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq" event={"ID":"432d6d0d-5b84-4654-bb1b-a214837b0532","Type":"ContainerStarted","Data":"fd95bab623138dcd80349f0c8510666db63e41f4ec33851416d60f9db268143a"} Feb 16 21:01:59 crc kubenswrapper[4805]: I0216 21:01:59.064060 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" event={"ID":"a27076c7-e210-4974-a9fb-1cceebf6584e","Type":"ContainerStarted","Data":"cd8e16b428d40fbf347e969ee9ee71f50be9d39c389156932d4b9c90530194c1"} Feb 16 21:01:59 crc kubenswrapper[4805]: I0216 21:01:59.064147 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" event={"ID":"a27076c7-e210-4974-a9fb-1cceebf6584e","Type":"ContainerStarted","Data":"8798adf556f82cb6e172cea7977d8a15681b70004b9fa3fb3d00ea9aefe1612e"} Feb 16 21:01:59 crc kubenswrapper[4805]: I0216 21:01:59.064187 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:01:59 crc kubenswrapper[4805]: I0216 21:01:59.100593 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" podStartSLOduration=2.100573042 podStartE2EDuration="2.100573042s" podCreationTimestamp="2026-02-16 21:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:01:59.094833797 +0000 UTC m=+336.913517132" watchObservedRunningTime="2026-02-16 21:01:59.100573042 +0000 UTC m=+336.919256347" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.077298 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq" event={"ID":"432d6d0d-5b84-4654-bb1b-a214837b0532","Type":"ContainerStarted","Data":"f61c6cc9d534a6481cc14889e1a6213a300b65a81a6880ea20073dd585682eb9"} Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.077543 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.083229 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.095319 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-49jsq" podStartSLOduration=2.117713524 podStartE2EDuration="4.095291735s" podCreationTimestamp="2026-02-16 21:01:57 +0000 UTC" firstStartedPulling="2026-02-16 21:01:58.42686309 +0000 UTC m=+336.245546385" lastFinishedPulling="2026-02-16 21:02:00.404441271 +0000 UTC m=+338.223124596" observedRunningTime="2026-02-16 21:02:01.093305819 +0000 UTC m=+338.911989124" watchObservedRunningTime="2026-02-16 21:02:01.095291735 +0000 UTC m=+338.913975060" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.648327 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-6qvpb"] Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.649456 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.651964 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.653418 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.653485 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.659257 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-6qvpb"] Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.797458 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz6mk\" (UniqueName: \"kubernetes.io/projected/71b01072-9a8c-4c2b-af64-dcd76e77e864-kube-api-access-sz6mk\") pod \"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.797751 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/71b01072-9a8c-4c2b-af64-dcd76e77e864-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.797912 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/71b01072-9a8c-4c2b-af64-dcd76e77e864-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.798013 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/71b01072-9a8c-4c2b-af64-dcd76e77e864-metrics-client-ca\") pod \"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.899636 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/71b01072-9a8c-4c2b-af64-dcd76e77e864-metrics-client-ca\") pod \"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.899954 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz6mk\" (UniqueName: \"kubernetes.io/projected/71b01072-9a8c-4c2b-af64-dcd76e77e864-kube-api-access-sz6mk\") pod \"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.900038 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/71b01072-9a8c-4c2b-af64-dcd76e77e864-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" 
Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.900172 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b01072-9a8c-4c2b-af64-dcd76e77e864-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.901208 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/71b01072-9a8c-4c2b-af64-dcd76e77e864-metrics-client-ca\") pod \"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.908407 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/71b01072-9a8c-4c2b-af64-dcd76e77e864-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.908430 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b01072-9a8c-4c2b-af64-dcd76e77e864-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.920564 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz6mk\" (UniqueName: \"kubernetes.io/projected/71b01072-9a8c-4c2b-af64-dcd76e77e864-kube-api-access-sz6mk\") pod 
\"prometheus-operator-db54df47d-6qvpb\" (UID: \"71b01072-9a8c-4c2b-af64-dcd76e77e864\") " pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:01 crc kubenswrapper[4805]: I0216 21:02:01.974066 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" Feb 16 21:02:02 crc kubenswrapper[4805]: I0216 21:02:02.422707 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-6qvpb"] Feb 16 21:02:02 crc kubenswrapper[4805]: W0216 21:02:02.431492 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71b01072_9a8c_4c2b_af64_dcd76e77e864.slice/crio-b290bba155172d9ff8bada23cbb90b451d541f51fc566995cb7f19843de0527e WatchSource:0}: Error finding container b290bba155172d9ff8bada23cbb90b451d541f51fc566995cb7f19843de0527e: Status 404 returned error can't find the container with id b290bba155172d9ff8bada23cbb90b451d541f51fc566995cb7f19843de0527e Feb 16 21:02:03 crc kubenswrapper[4805]: I0216 21:02:03.096098 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" event={"ID":"71b01072-9a8c-4c2b-af64-dcd76e77e864","Type":"ContainerStarted","Data":"b290bba155172d9ff8bada23cbb90b451d541f51fc566995cb7f19843de0527e"} Feb 16 21:02:04 crc kubenswrapper[4805]: I0216 21:02:04.102860 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" event={"ID":"71b01072-9a8c-4c2b-af64-dcd76e77e864","Type":"ContainerStarted","Data":"e3f6107c837cc48b84fc2aab2d634a482f44247dd6e9b431f0a6a9a30a50faef"} Feb 16 21:02:04 crc kubenswrapper[4805]: I0216 21:02:04.103232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" 
event={"ID":"71b01072-9a8c-4c2b-af64-dcd76e77e864","Type":"ContainerStarted","Data":"06d3ccc98de6389cff248dba22a1522f335fae70fd582bead8930d58a09ac3c2"} Feb 16 21:02:04 crc kubenswrapper[4805]: I0216 21:02:04.123015 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-6qvpb" podStartSLOduration=1.74342615 podStartE2EDuration="3.122987396s" podCreationTimestamp="2026-02-16 21:02:01 +0000 UTC" firstStartedPulling="2026-02-16 21:02:02.432917157 +0000 UTC m=+340.251600462" lastFinishedPulling="2026-02-16 21:02:03.812478403 +0000 UTC m=+341.631161708" observedRunningTime="2026-02-16 21:02:04.122794851 +0000 UTC m=+341.941478176" watchObservedRunningTime="2026-02-16 21:02:04.122987396 +0000 UTC m=+341.941670721" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.001737 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-z6js9"] Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.003573 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.005228 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.005507 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.013585 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-z6js9"] Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.043107 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-65748"] Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.044175 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.045930 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.046128 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.046536 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8"] Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.053634 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.061245 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.061479 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.061628 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.075813 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8"] Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178384 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-wtmp\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178439 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6a0276eb-947a-4573-9cf2-02171ab17893-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178464 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndrpt\" (UniqueName: 
\"kubernetes.io/projected/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-kube-api-access-ndrpt\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: \"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178483 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07e3cc8f-6a57-469d-8f46-80d76a12affa-sys\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178645 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178748 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-tls\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-textfile\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178855 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: \"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/07e3cc8f-6a57-469d-8f46-80d76a12affa-root\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178912 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/07e3cc8f-6a57-469d-8f46-80d76a12affa-metrics-client-ca\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178951 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: \"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.178986 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmvxd\" (UniqueName: \"kubernetes.io/projected/07e3cc8f-6a57-469d-8f46-80d76a12affa-kube-api-access-wmvxd\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " 
pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.179029 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a0276eb-947a-4573-9cf2-02171ab17893-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.179071 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a0276eb-947a-4573-9cf2-02171ab17893-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.179097 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a0276eb-947a-4573-9cf2-02171ab17893-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.179120 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmwts\" (UniqueName: \"kubernetes.io/projected/6a0276eb-947a-4573-9cf2-02171ab17893-kube-api-access-zmwts\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.179153 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: \"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.179197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6a0276eb-947a-4573-9cf2-02171ab17893-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.280413 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a0276eb-947a-4573-9cf2-02171ab17893-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.280472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a0276eb-947a-4573-9cf2-02171ab17893-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.280506 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/6a0276eb-947a-4573-9cf2-02171ab17893-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.280577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmwts\" (UniqueName: \"kubernetes.io/projected/6a0276eb-947a-4573-9cf2-02171ab17893-kube-api-access-zmwts\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.280609 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: \"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.281048 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6a0276eb-947a-4573-9cf2-02171ab17893-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.281737 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a0276eb-947a-4573-9cf2-02171ab17893-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282236 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6a0276eb-947a-4573-9cf2-02171ab17893-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282308 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-wtmp\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282489 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-wtmp\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282334 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6a0276eb-947a-4573-9cf2-02171ab17893-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282542 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndrpt\" (UniqueName: 
\"kubernetes.io/projected/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-kube-api-access-ndrpt\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: \"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282572 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07e3cc8f-6a57-469d-8f46-80d76a12affa-sys\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282600 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282641 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-tls\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282676 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-textfile\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282704 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: \"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282739 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6a0276eb-947a-4573-9cf2-02171ab17893-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282787 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/07e3cc8f-6a57-469d-8f46-80d76a12affa-root\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282748 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/07e3cc8f-6a57-469d-8f46-80d76a12affa-root\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282858 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/07e3cc8f-6a57-469d-8f46-80d76a12affa-metrics-client-ca\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282898 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: \"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.282936 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmvxd\" (UniqueName: \"kubernetes.io/projected/07e3cc8f-6a57-469d-8f46-80d76a12affa-kube-api-access-wmvxd\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.283053 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07e3cc8f-6a57-469d-8f46-80d76a12affa-sys\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: E0216 21:02:06.283640 4805 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Feb 16 21:02:06 crc kubenswrapper[4805]: E0216 21:02:06.283700 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-tls podName:07e3cc8f-6a57-469d-8f46-80d76a12affa nodeName:}" failed. No retries permitted until 2026-02-16 21:02:06.783687326 +0000 UTC m=+344.602370621 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-tls") pod "node-exporter-65748" (UID: "07e3cc8f-6a57-469d-8f46-80d76a12affa") : secret "node-exporter-tls" not found Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.284364 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-textfile\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.285002 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: \"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.286481 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/07e3cc8f-6a57-469d-8f46-80d76a12affa-metrics-client-ca\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.288428 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a0276eb-947a-4573-9cf2-02171ab17893-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.288602 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: \"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.288996 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.296624 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a0276eb-947a-4573-9cf2-02171ab17893-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.298289 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: \"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.302098 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndrpt\" (UniqueName: \"kubernetes.io/projected/1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1-kube-api-access-ndrpt\") pod \"openshift-state-metrics-566fddb674-z6js9\" (UID: 
\"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.305799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmwts\" (UniqueName: \"kubernetes.io/projected/6a0276eb-947a-4573-9cf2-02171ab17893-kube-api-access-zmwts\") pod \"kube-state-metrics-777cb5bd5d-rrpk8\" (UID: \"6a0276eb-947a-4573-9cf2-02171ab17893\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.306945 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmvxd\" (UniqueName: \"kubernetes.io/projected/07e3cc8f-6a57-469d-8f46-80d76a12affa-kube-api-access-wmvxd\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.327971 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.401149 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.634100 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8"] Feb 16 21:02:06 crc kubenswrapper[4805]: W0216 21:02:06.641691 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a0276eb_947a_4573_9cf2_02171ab17893.slice/crio-17ddf4d7eef9a121f597509e2046f2607132a9a242872088dd78c1e066d54c73 WatchSource:0}: Error finding container 17ddf4d7eef9a121f597509e2046f2607132a9a242872088dd78c1e066d54c73: Status 404 returned error can't find the container with id 17ddf4d7eef9a121f597509e2046f2607132a9a242872088dd78c1e066d54c73 Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.785423 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-z6js9"] Feb 16 21:02:06 crc kubenswrapper[4805]: W0216 21:02:06.786737 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb6776b_92d2_46ed_9d8e_e3a54b57a0f1.slice/crio-de6afeb39fe1f8e1ed51b912624a04b63c627f943643e26d7db57180e60eb039 WatchSource:0}: Error finding container de6afeb39fe1f8e1ed51b912624a04b63c627f943643e26d7db57180e60eb039: Status 404 returned error can't find the container with id de6afeb39fe1f8e1ed51b912624a04b63c627f943643e26d7db57180e60eb039 Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.788912 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-tls\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.796276 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/07e3cc8f-6a57-469d-8f46-80d76a12affa-node-exporter-tls\") pod \"node-exporter-65748\" (UID: \"07e3cc8f-6a57-469d-8f46-80d76a12affa\") " pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:06 crc kubenswrapper[4805]: I0216 21:02:06.972470 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-65748" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.132666 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" event={"ID":"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1","Type":"ContainerStarted","Data":"dfeaeaa9a2c027ce12f73e463b32a82a989558e8311b75be29e7f0d06b1bc3e5"} Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.132732 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" event={"ID":"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1","Type":"ContainerStarted","Data":"7ac7b8d94f84aedfdbcae300a4d489e0d79c13ae82062bafb434f81f1cb26025"} Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.132746 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" event={"ID":"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1","Type":"ContainerStarted","Data":"de6afeb39fe1f8e1ed51b912624a04b63c627f943643e26d7db57180e60eb039"} Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.135409 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-65748" event={"ID":"07e3cc8f-6a57-469d-8f46-80d76a12affa","Type":"ContainerStarted","Data":"d58b960bbf774873d82f2bfe766dcbef772ea8df8f99ebe51e8418125ff704e9"} Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.140741 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" event={"ID":"6a0276eb-947a-4573-9cf2-02171ab17893","Type":"ContainerStarted","Data":"17ddf4d7eef9a121f597509e2046f2607132a9a242872088dd78c1e066d54c73"} Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.227703 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.230239 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.234137 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.234471 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.234649 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.234856 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.234959 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.235129 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.235297 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.244274 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.264771 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.294666 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.294711 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-web-config\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.294753 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.294772 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.294796 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.294812 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mldkm\" (UniqueName: \"kubernetes.io/projected/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-kube-api-access-mldkm\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.294838 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.294869 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-config-volume\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.294887 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: 
I0216 21:02:07.294902 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.294923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-config-out\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.294942 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.395788 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-web-config\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.396174 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.396200 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.396229 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.396249 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mldkm\" (UniqueName: \"kubernetes.io/projected/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-kube-api-access-mldkm\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.396280 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.396302 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-config-volume\") pod \"alertmanager-main-0\" (UID: 
\"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.396323 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.396340 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.396359 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-config-out\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.396377 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.396404 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-tls-assets\") pod 
\"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.397347 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.397886 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.398097 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.401211 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.403022 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-config-out\") pod \"alertmanager-main-0\" (UID: 
\"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.403365 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.409688 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.409774 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-web-config\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.410518 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.411855 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mldkm\" (UniqueName: \"kubernetes.io/projected/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-kube-api-access-mldkm\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.417439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.419093 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/85fdd027-e8bd-4ad3-b25c-83d1f81657d7-config-volume\") pod \"alertmanager-main-0\" (UID: \"85fdd027-e8bd-4ad3-b25c-83d1f81657d7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:07 crc kubenswrapper[4805]: I0216 21:02:07.578806 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.051401 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 21:02:08 crc kubenswrapper[4805]: W0216 21:02:08.074819 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85fdd027_e8bd_4ad3_b25c_83d1f81657d7.slice/crio-4d9afd54321154fe1b8e90a5170b0096a05da7bd3118de0945caa52cd770f0ac WatchSource:0}: Error finding container 4d9afd54321154fe1b8e90a5170b0096a05da7bd3118de0945caa52cd770f0ac: Status 404 returned error can't find the container with id 4d9afd54321154fe1b8e90a5170b0096a05da7bd3118de0945caa52cd770f0ac Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.100325 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.100442 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.128897 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk"] Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.131966 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.134956 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.135018 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.135208 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.135245 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-61tjp4imrehj8" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.135378 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.135444 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" 
Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.141101 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk"] Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.155401 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"85fdd027-e8bd-4ad3-b25c-83d1f81657d7","Type":"ContainerStarted","Data":"4d9afd54321154fe1b8e90a5170b0096a05da7bd3118de0945caa52cd770f0ac"} Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.231171 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.231228 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.231258 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-grpc-tls\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.231282 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-tls\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.231305 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4546t\" (UniqueName: \"kubernetes.io/projected/2d04de0f-6fe1-4db1-aa9d-d6225839f104-kube-api-access-4546t\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.231340 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d04de0f-6fe1-4db1-aa9d-d6225839f104-metrics-client-ca\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.231435 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.231593 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.332712 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d04de0f-6fe1-4db1-aa9d-d6225839f104-metrics-client-ca\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.332789 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.332843 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.332893 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " 
pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.332913 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.332936 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-grpc-tls\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.332962 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-tls\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.332983 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4546t\" (UniqueName: \"kubernetes.io/projected/2d04de0f-6fe1-4db1-aa9d-d6225839f104-kube-api-access-4546t\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.334366 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2d04de0f-6fe1-4db1-aa9d-d6225839f104-metrics-client-ca\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.339518 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-tls\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.341232 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-grpc-tls\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.341843 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.342069 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 
21:02:08.349948 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.354432 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2d04de0f-6fe1-4db1-aa9d-d6225839f104-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.354508 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4546t\" (UniqueName: \"kubernetes.io/projected/2d04de0f-6fe1-4db1-aa9d-d6225839f104-kube-api-access-4546t\") pod \"thanos-querier-7f7b8bfdcc-twzqk\" (UID: \"2d04de0f-6fe1-4db1-aa9d-d6225839f104\") " pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:08 crc kubenswrapper[4805]: I0216 21:02:08.453849 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:09 crc kubenswrapper[4805]: I0216 21:02:09.161152 4805 generic.go:334] "Generic (PLEG): container finished" podID="07e3cc8f-6a57-469d-8f46-80d76a12affa" containerID="bdb5793f6c06bb8d0c40456dff2d917fb3f9ee2ba7c59e4a6bc05fe430e94bb4" exitCode=0 Feb 16 21:02:09 crc kubenswrapper[4805]: I0216 21:02:09.161626 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-65748" event={"ID":"07e3cc8f-6a57-469d-8f46-80d76a12affa","Type":"ContainerDied","Data":"bdb5793f6c06bb8d0c40456dff2d917fb3f9ee2ba7c59e4a6bc05fe430e94bb4"} Feb 16 21:02:09 crc kubenswrapper[4805]: I0216 21:02:09.165024 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" event={"ID":"6a0276eb-947a-4573-9cf2-02171ab17893","Type":"ContainerStarted","Data":"89476511aa5a75bf3fac94fdf06e32b4ff90f0d892059f0695db50b08fffc30a"} Feb 16 21:02:09 crc kubenswrapper[4805]: I0216 21:02:09.165192 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" event={"ID":"6a0276eb-947a-4573-9cf2-02171ab17893","Type":"ContainerStarted","Data":"d7b8e0391e5935b1d169914861abd13273acbf27522ab9b38057a0222c9a8811"} Feb 16 21:02:09 crc kubenswrapper[4805]: I0216 21:02:09.174468 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" event={"ID":"1cb6776b-92d2-46ed-9d8e-e3a54b57a0f1","Type":"ContainerStarted","Data":"0a25bfea2c4076fc7bf949615a816e3c389384fb79f78fb2520ce6f4521689cb"} Feb 16 21:02:09 crc kubenswrapper[4805]: I0216 21:02:09.197667 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-z6js9" podStartSLOduration=2.470431226 podStartE2EDuration="4.197651414s" podCreationTimestamp="2026-02-16 21:02:05 +0000 UTC" 
firstStartedPulling="2026-02-16 21:02:07.105996892 +0000 UTC m=+344.924680187" lastFinishedPulling="2026-02-16 21:02:08.83321704 +0000 UTC m=+346.651900375" observedRunningTime="2026-02-16 21:02:09.192281089 +0000 UTC m=+347.010964384" watchObservedRunningTime="2026-02-16 21:02:09.197651414 +0000 UTC m=+347.016334709" Feb 16 21:02:09 crc kubenswrapper[4805]: I0216 21:02:09.280436 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk"] Feb 16 21:02:09 crc kubenswrapper[4805]: W0216 21:02:09.291655 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d04de0f_6fe1_4db1_aa9d_d6225839f104.slice/crio-f335c728ffc6468a9e0abfb13f57d077d926b9972afad34b0253ea5664561cbc WatchSource:0}: Error finding container f335c728ffc6468a9e0abfb13f57d077d926b9972afad34b0253ea5664561cbc: Status 404 returned error can't find the container with id f335c728ffc6468a9e0abfb13f57d077d926b9972afad34b0253ea5664561cbc Feb 16 21:02:10 crc kubenswrapper[4805]: I0216 21:02:10.184066 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-65748" event={"ID":"07e3cc8f-6a57-469d-8f46-80d76a12affa","Type":"ContainerStarted","Data":"805f0912cb258c294bce4355463bd5f0481b7ecb263cf2b93450aecbc17fcaa8"} Feb 16 21:02:10 crc kubenswrapper[4805]: I0216 21:02:10.184619 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-65748" event={"ID":"07e3cc8f-6a57-469d-8f46-80d76a12affa","Type":"ContainerStarted","Data":"de944e2422d4721c3db4f01e6f9632e12a8925ae950e268a7361fbde8a6faf79"} Feb 16 21:02:10 crc kubenswrapper[4805]: I0216 21:02:10.186868 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" event={"ID":"2d04de0f-6fe1-4db1-aa9d-d6225839f104","Type":"ContainerStarted","Data":"f335c728ffc6468a9e0abfb13f57d077d926b9972afad34b0253ea5664561cbc"} 
Feb 16 21:02:10 crc kubenswrapper[4805]: I0216 21:02:10.190969 4805 generic.go:334] "Generic (PLEG): container finished" podID="85fdd027-e8bd-4ad3-b25c-83d1f81657d7" containerID="121a2bf88c6a5f6e3a259bfce35a278c829b411151d51f2b59be8ffc80f93845" exitCode=0 Feb 16 21:02:10 crc kubenswrapper[4805]: I0216 21:02:10.191057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"85fdd027-e8bd-4ad3-b25c-83d1f81657d7","Type":"ContainerDied","Data":"121a2bf88c6a5f6e3a259bfce35a278c829b411151d51f2b59be8ffc80f93845"} Feb 16 21:02:10 crc kubenswrapper[4805]: I0216 21:02:10.194516 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" event={"ID":"6a0276eb-947a-4573-9cf2-02171ab17893","Type":"ContainerStarted","Data":"98ebf6081bbaafb19ff4400661158ef6681cb4c6e35e2500234c22e8d096f645"} Feb 16 21:02:10 crc kubenswrapper[4805]: I0216 21:02:10.221037 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-65748" podStartSLOduration=2.400530983 podStartE2EDuration="4.221016064s" podCreationTimestamp="2026-02-16 21:02:06 +0000 UTC" firstStartedPulling="2026-02-16 21:02:07.011697009 +0000 UTC m=+344.830380304" lastFinishedPulling="2026-02-16 21:02:08.83218206 +0000 UTC m=+346.650865385" observedRunningTime="2026-02-16 21:02:10.206435045 +0000 UTC m=+348.025118360" watchObservedRunningTime="2026-02-16 21:02:10.221016064 +0000 UTC m=+348.039699359" Feb 16 21:02:10 crc kubenswrapper[4805]: I0216 21:02:10.267937 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rrpk8" podStartSLOduration=2.079210879 podStartE2EDuration="4.267908933s" podCreationTimestamp="2026-02-16 21:02:06 +0000 UTC" firstStartedPulling="2026-02-16 21:02:06.643509477 +0000 UTC m=+344.462192772" lastFinishedPulling="2026-02-16 21:02:08.832207521 +0000 UTC 
m=+346.650890826" observedRunningTime="2026-02-16 21:02:10.267593854 +0000 UTC m=+348.086277149" watchObservedRunningTime="2026-02-16 21:02:10.267908933 +0000 UTC m=+348.086592258" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.314420 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-55c6df6485-gfrcq"] Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.315363 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.320385 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.320580 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.320677 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.320860 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.321057 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-6v5a84eslirs7" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.321154 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-7z255" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.326831 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-55c6df6485-gfrcq"] Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.379888 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.379943 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-secret-metrics-server-tls\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.379986 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-secret-metrics-client-certs\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.380012 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-audit-log\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.380210 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-client-ca-bundle\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " 
pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.380264 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6sl5\" (UniqueName: \"kubernetes.io/projected/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-kube-api-access-h6sl5\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.380400 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-metrics-server-audit-profiles\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.481494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-secret-metrics-server-tls\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.481538 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-secret-metrics-client-certs\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.481557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-audit-log\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.481599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-client-ca-bundle\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.481617 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6sl5\" (UniqueName: \"kubernetes.io/projected/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-kube-api-access-h6sl5\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.481650 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-metrics-server-audit-profiles\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.481680 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.482244 
4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-audit-log\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.482617 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.483695 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-metrics-server-audit-profiles\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.487143 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-secret-metrics-client-certs\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.489927 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-secret-metrics-server-tls\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " 
pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.490193 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-client-ca-bundle\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.502868 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6sl5\" (UniqueName: \"kubernetes.io/projected/77b58813-5f6d-481e-8e94-9cd0ad12ff8e-kube-api-access-h6sl5\") pod \"metrics-server-55c6df6485-gfrcq\" (UID: \"77b58813-5f6d-481e-8e94-9cd0ad12ff8e\") " pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.673262 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.839053 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2"] Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.840079 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.846018 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.846265 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.849144 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2"] Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.886971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a4ce271f-2c77-40b7-9e8e-1a38456ca667-monitoring-plugin-cert\") pod \"monitoring-plugin-65cccfcb6c-nfsh2\" (UID: \"a4ce271f-2c77-40b7-9e8e-1a38456ca667\") " pod="openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.989676 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a4ce271f-2c77-40b7-9e8e-1a38456ca667-monitoring-plugin-cert\") pod \"monitoring-plugin-65cccfcb6c-nfsh2\" (UID: \"a4ce271f-2c77-40b7-9e8e-1a38456ca667\") " pod="openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2" Feb 16 21:02:11 crc kubenswrapper[4805]: I0216 21:02:11.993196 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a4ce271f-2c77-40b7-9e8e-1a38456ca667-monitoring-plugin-cert\") pod \"monitoring-plugin-65cccfcb6c-nfsh2\" (UID: \"a4ce271f-2c77-40b7-9e8e-1a38456ca667\") " pod="openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.163638 4805 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.354156 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.356116 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.360121 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-h8fcb" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.360361 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.360691 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.360885 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.360995 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.361473 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.361694 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.361868 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.362102 4805 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.362237 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-bf93kmne9g0ju" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.362897 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.366912 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.367256 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.380873 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.397968 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398018 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398049 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-web-config\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398093 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/422b0daf-3026-4543-85de-67c794389145-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398140 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-config\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398168 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398198 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398221 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398251 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/422b0daf-3026-4543-85de-67c794389145-config-out\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398272 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398299 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 
21:02:12.398329 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398355 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398378 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lwsb\" (UniqueName: \"kubernetes.io/projected/422b0daf-3026-4543-85de-67c794389145-kube-api-access-6lwsb\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398418 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398439 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/422b0daf-3026-4543-85de-67c794389145-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.398462 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.499733 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-config\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.499776 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.499799 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.499822 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.499843 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/422b0daf-3026-4543-85de-67c794389145-config-out\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.499857 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.499877 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.499900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.499918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.499956 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lwsb\" (UniqueName: \"kubernetes.io/projected/422b0daf-3026-4543-85de-67c794389145-kube-api-access-6lwsb\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.499985 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.500002 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/422b0daf-3026-4543-85de-67c794389145-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.500018 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.500048 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: 
\"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.500063 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.500080 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-web-config\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.500097 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.500109 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/422b0daf-3026-4543-85de-67c794389145-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.501800 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.502405 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.502691 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.504021 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.504191 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.504259 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/422b0daf-3026-4543-85de-67c794389145-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.505457 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/422b0daf-3026-4543-85de-67c794389145-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.506005 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.506040 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.506459 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.507859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc 
kubenswrapper[4805]: I0216 21:02:12.507897 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-config\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.508154 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/422b0daf-3026-4543-85de-67c794389145-config-out\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.509744 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/422b0daf-3026-4543-85de-67c794389145-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.517775 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.517822 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.518166 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/422b0daf-3026-4543-85de-67c794389145-web-config\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.524252 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lwsb\" (UniqueName: \"kubernetes.io/projected/422b0daf-3026-4543-85de-67c794389145-kube-api-access-6lwsb\") pod \"prometheus-k8s-0\" (UID: \"422b0daf-3026-4543-85de-67c794389145\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:12 crc kubenswrapper[4805]: I0216 21:02:12.671703 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:13 crc kubenswrapper[4805]: I0216 21:02:13.222425 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" event={"ID":"2d04de0f-6fe1-4db1-aa9d-d6225839f104","Type":"ContainerStarted","Data":"51a2f1ac2e1e3d6d5ded2869736b93bd185118cc4a95293953e7804b31d151d8"} Feb 16 21:02:13 crc kubenswrapper[4805]: I0216 21:02:13.348049 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 21:02:13 crc kubenswrapper[4805]: W0216 21:02:13.351618 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod422b0daf_3026_4543_85de_67c794389145.slice/crio-9d7c6c113d594f28f0a07ac1bf4a937382e87eb824cb3aa204cb3778b7f114cc WatchSource:0}: Error finding container 9d7c6c113d594f28f0a07ac1bf4a937382e87eb824cb3aa204cb3778b7f114cc: Status 404 returned error can't find the container with id 9d7c6c113d594f28f0a07ac1bf4a937382e87eb824cb3aa204cb3778b7f114cc Feb 16 21:02:13 crc kubenswrapper[4805]: I0216 21:02:13.425405 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2"] Feb 16 21:02:13 crc kubenswrapper[4805]: I0216 21:02:13.437962 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-55c6df6485-gfrcq"] Feb 16 21:02:13 crc kubenswrapper[4805]: W0216 21:02:13.441971 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4ce271f_2c77_40b7_9e8e_1a38456ca667.slice/crio-c7e82da169825715abdb62d3a3c7ddd1066ea352ed9899501d2ad21764bd1343 WatchSource:0}: Error finding container c7e82da169825715abdb62d3a3c7ddd1066ea352ed9899501d2ad21764bd1343: Status 404 returned error can't find the container with id c7e82da169825715abdb62d3a3c7ddd1066ea352ed9899501d2ad21764bd1343 Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.239294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" event={"ID":"2d04de0f-6fe1-4db1-aa9d-d6225839f104","Type":"ContainerStarted","Data":"13f580f1fcf2b8f64b7ab295f8f5b54d8f01a3626e53f6234b121ea5b1e33df1"} Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.239679 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" event={"ID":"2d04de0f-6fe1-4db1-aa9d-d6225839f104","Type":"ContainerStarted","Data":"b66acdd52b8648431fed5a2b5302f79be88a6ecb56247182425a9b58565951c5"} Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.241052 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2" event={"ID":"a4ce271f-2c77-40b7-9e8e-1a38456ca667","Type":"ContainerStarted","Data":"c7e82da169825715abdb62d3a3c7ddd1066ea352ed9899501d2ad21764bd1343"} Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.244425 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"85fdd027-e8bd-4ad3-b25c-83d1f81657d7","Type":"ContainerStarted","Data":"26022500f181f92cf893c7d452e05df57502c4cd943ef7db445b8d40e0ae20af"} Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.244459 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"85fdd027-e8bd-4ad3-b25c-83d1f81657d7","Type":"ContainerStarted","Data":"2a7dc73a1d1641dbd3edf489444c97c06982c67904aa7bbfb97c727970d486ea"} Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.244474 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"85fdd027-e8bd-4ad3-b25c-83d1f81657d7","Type":"ContainerStarted","Data":"9370eacab6b77163becaadaaa4fdc095482bed16df9833a37ebe6e14af958388"} Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.244488 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"85fdd027-e8bd-4ad3-b25c-83d1f81657d7","Type":"ContainerStarted","Data":"d7404a04fb1b7b3ae33742552027a279c7cc0093401de18bd4263a106672f9dd"} Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.244502 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"85fdd027-e8bd-4ad3-b25c-83d1f81657d7","Type":"ContainerStarted","Data":"a99e300c41c7036eddb08a6d58ec95935c26274bc13cde69c0f4f82a231847a7"} Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.245537 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" event={"ID":"77b58813-5f6d-481e-8e94-9cd0ad12ff8e","Type":"ContainerStarted","Data":"e82dc5529aa12b00e9663df38d236b2a3d8ad669ecfed1b8b340f5ef3d13f739"} Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.247003 4805 generic.go:334] "Generic (PLEG): container finished" podID="422b0daf-3026-4543-85de-67c794389145" containerID="57ea3843f83776ac25c206d37224a8c21cfce2a550e30b9f85b3b3909a21c11c" 
exitCode=0 Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.247036 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"422b0daf-3026-4543-85de-67c794389145","Type":"ContainerDied","Data":"57ea3843f83776ac25c206d37224a8c21cfce2a550e30b9f85b3b3909a21c11c"} Feb 16 21:02:14 crc kubenswrapper[4805]: I0216 21:02:14.247054 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"422b0daf-3026-4543-85de-67c794389145","Type":"ContainerStarted","Data":"9d7c6c113d594f28f0a07ac1bf4a937382e87eb824cb3aa204cb3778b7f114cc"} Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.264595 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"85fdd027-e8bd-4ad3-b25c-83d1f81657d7","Type":"ContainerStarted","Data":"cb8285be0ed9ef84e3e6248a60e30948a3c1da40929b87a1fffb53301e0f443e"} Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.269412 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" event={"ID":"77b58813-5f6d-481e-8e94-9cd0ad12ff8e","Type":"ContainerStarted","Data":"cf72f7e4873e02328cb33124bf02f2e5f587cd5d6d3aaf4404bcd2ca297cd9c5"} Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.272252 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" event={"ID":"2d04de0f-6fe1-4db1-aa9d-d6225839f104","Type":"ContainerStarted","Data":"7aa5b0a8396a3006b693706babfd77bee1e8f8f318671e3c916d4c1afeb7a2ad"} Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.272304 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" event={"ID":"2d04de0f-6fe1-4db1-aa9d-d6225839f104","Type":"ContainerStarted","Data":"9d11b161f39d33406fffb54396a2bd2f306dd9f1ba21f1ff58cc9d21ebb021c5"} Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 
21:02:16.272318 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" event={"ID":"2d04de0f-6fe1-4db1-aa9d-d6225839f104","Type":"ContainerStarted","Data":"cfc247e802682ad3345420a99d28741ba14f370e8bf0c9bce6efe47f77426272"} Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.272354 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.273530 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2" event={"ID":"a4ce271f-2c77-40b7-9e8e-1a38456ca667","Type":"ContainerStarted","Data":"c03150043c6ad31c35f713e6abb3f9160bd1cb24d54e44fc480bbc090eb44294"} Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.274433 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2" Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.280737 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2" Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.294781 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=1.8503614929999999 podStartE2EDuration="9.294760713s" podCreationTimestamp="2026-02-16 21:02:07 +0000 UTC" firstStartedPulling="2026-02-16 21:02:08.080042583 +0000 UTC m=+345.898725878" lastFinishedPulling="2026-02-16 21:02:15.524441803 +0000 UTC m=+353.343125098" observedRunningTime="2026-02-16 21:02:16.290346356 +0000 UTC m=+354.109029661" watchObservedRunningTime="2026-02-16 21:02:16.294760713 +0000 UTC m=+354.113444008" Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.332188 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" podStartSLOduration=3.252634835 podStartE2EDuration="5.332167729s" podCreationTimestamp="2026-02-16 21:02:11 +0000 UTC" firstStartedPulling="2026-02-16 21:02:13.449218623 +0000 UTC m=+351.267901918" lastFinishedPulling="2026-02-16 21:02:15.528751517 +0000 UTC m=+353.347434812" observedRunningTime="2026-02-16 21:02:16.314358297 +0000 UTC m=+354.133041592" watchObservedRunningTime="2026-02-16 21:02:16.332167729 +0000 UTC m=+354.150851044" Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.333852 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-65cccfcb6c-nfsh2" podStartSLOduration=3.244725888 podStartE2EDuration="5.333838907s" podCreationTimestamp="2026-02-16 21:02:11 +0000 UTC" firstStartedPulling="2026-02-16 21:02:13.444479786 +0000 UTC m=+351.263163081" lastFinishedPulling="2026-02-16 21:02:15.533592805 +0000 UTC m=+353.352276100" observedRunningTime="2026-02-16 21:02:16.328587216 +0000 UTC m=+354.147270521" watchObservedRunningTime="2026-02-16 21:02:16.333838907 +0000 UTC m=+354.152522212" Feb 16 21:02:16 crc kubenswrapper[4805]: I0216 21:02:16.356495 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" podStartSLOduration=2.123999843 podStartE2EDuration="8.356475488s" podCreationTimestamp="2026-02-16 21:02:08 +0000 UTC" firstStartedPulling="2026-02-16 21:02:09.293676787 +0000 UTC m=+347.112360082" lastFinishedPulling="2026-02-16 21:02:15.526152442 +0000 UTC m=+353.344835727" observedRunningTime="2026-02-16 21:02:16.354323996 +0000 UTC m=+354.173007341" watchObservedRunningTime="2026-02-16 21:02:16.356475488 +0000 UTC m=+354.175158793" Feb 16 21:02:17 crc kubenswrapper[4805]: I0216 21:02:17.776372 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-ps5z8" Feb 16 21:02:17 crc 
kubenswrapper[4805]: I0216 21:02:17.834659 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w44f5"] Feb 16 21:02:18 crc kubenswrapper[4805]: I0216 21:02:18.289526 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"422b0daf-3026-4543-85de-67c794389145","Type":"ContainerStarted","Data":"ab5bd6888afa0e05e86d199ba3880c6d34497335b7c61d0169c2c7183f56de8e"} Feb 16 21:02:18 crc kubenswrapper[4805]: I0216 21:02:18.468453 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7f7b8bfdcc-twzqk" Feb 16 21:02:19 crc kubenswrapper[4805]: I0216 21:02:19.305648 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"422b0daf-3026-4543-85de-67c794389145","Type":"ContainerStarted","Data":"49e45ea60f01939df646a464b9c5f4470a53393a4c9c94a8db7f4d1efbd440be"} Feb 16 21:02:19 crc kubenswrapper[4805]: I0216 21:02:19.306117 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"422b0daf-3026-4543-85de-67c794389145","Type":"ContainerStarted","Data":"3a103ce5bb2e3735730f3271cdfef0e99efc0d6357f263aa4dfa6d20bc516828"} Feb 16 21:02:19 crc kubenswrapper[4805]: I0216 21:02:19.306140 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"422b0daf-3026-4543-85de-67c794389145","Type":"ContainerStarted","Data":"4cbbc83161998147327115d116dcfb433811c8c9de84ff34df89685a8cf8899b"} Feb 16 21:02:19 crc kubenswrapper[4805]: I0216 21:02:19.306157 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"422b0daf-3026-4543-85de-67c794389145","Type":"ContainerStarted","Data":"8ff2c1dd462a564293f2814fe9d666e53cbc1563328bcc34a26ce22e036902b6"} Feb 16 21:02:19 crc kubenswrapper[4805]: I0216 21:02:19.306174 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"422b0daf-3026-4543-85de-67c794389145","Type":"ContainerStarted","Data":"0c4be5f23775c78780171c8f0f9de908c190dc4206e7d5df925aa6964e868193"} Feb 16 21:02:19 crc kubenswrapper[4805]: I0216 21:02:19.363542 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.720121572 podStartE2EDuration="7.363511695s" podCreationTimestamp="2026-02-16 21:02:12 +0000 UTC" firstStartedPulling="2026-02-16 21:02:14.248457555 +0000 UTC m=+352.067140860" lastFinishedPulling="2026-02-16 21:02:17.891847688 +0000 UTC m=+355.710530983" observedRunningTime="2026-02-16 21:02:19.35849798 +0000 UTC m=+357.177181315" watchObservedRunningTime="2026-02-16 21:02:19.363511695 +0000 UTC m=+357.182195010" Feb 16 21:02:22 crc kubenswrapper[4805]: I0216 21:02:22.672116 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:02:31 crc kubenswrapper[4805]: I0216 21:02:31.673756 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:31 crc kubenswrapper[4805]: I0216 21:02:31.674251 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.629131 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c88b4c759-5kqtq"] Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.630393 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.658478 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c88b4c759-5kqtq"] Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.808350 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-trusted-ca-bundle\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.808494 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-config\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.808662 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-oauth-serving-cert\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.808790 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-oauth-config\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.808872 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-serving-cert\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.808972 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cvdh\" (UniqueName: \"kubernetes.io/projected/5016a697-f4a8-4d27-a16a-3e91a257fc94-kube-api-access-2cvdh\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.809007 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-service-ca\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.909996 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-service-ca\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.910140 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-trusted-ca-bundle\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.910197 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-config\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.910248 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-oauth-serving-cert\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.910291 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-oauth-config\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.910337 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-serving-cert\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.910396 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cvdh\" (UniqueName: \"kubernetes.io/projected/5016a697-f4a8-4d27-a16a-3e91a257fc94-kube-api-access-2cvdh\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.912070 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-config\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.912441 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-trusted-ca-bundle\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.913923 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-service-ca\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.914698 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-oauth-serving-cert\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.919897 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-serving-cert\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.921451 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-oauth-config\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.934034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cvdh\" (UniqueName: \"kubernetes.io/projected/5016a697-f4a8-4d27-a16a-3e91a257fc94-kube-api-access-2cvdh\") pod \"console-5c88b4c759-5kqtq\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:36 crc kubenswrapper[4805]: I0216 21:02:36.956784 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:37 crc kubenswrapper[4805]: W0216 21:02:37.280943 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5016a697_f4a8_4d27_a16a_3e91a257fc94.slice/crio-d43d11a8f00fc5505fe187d8bc16ab4a3582a4597f257349d30d761c543a0e8d WatchSource:0}: Error finding container d43d11a8f00fc5505fe187d8bc16ab4a3582a4597f257349d30d761c543a0e8d: Status 404 returned error can't find the container with id d43d11a8f00fc5505fe187d8bc16ab4a3582a4597f257349d30d761c543a0e8d Feb 16 21:02:37 crc kubenswrapper[4805]: I0216 21:02:37.283057 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c88b4c759-5kqtq"] Feb 16 21:02:37 crc kubenswrapper[4805]: I0216 21:02:37.465019 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c88b4c759-5kqtq" event={"ID":"5016a697-f4a8-4d27-a16a-3e91a257fc94","Type":"ContainerStarted","Data":"d43d11a8f00fc5505fe187d8bc16ab4a3582a4597f257349d30d761c543a0e8d"} Feb 16 21:02:38 crc kubenswrapper[4805]: I0216 21:02:38.100488 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:02:38 crc kubenswrapper[4805]: I0216 21:02:38.101065 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:02:38 crc kubenswrapper[4805]: I0216 21:02:38.473778 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c88b4c759-5kqtq" event={"ID":"5016a697-f4a8-4d27-a16a-3e91a257fc94","Type":"ContainerStarted","Data":"9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6"} Feb 16 21:02:38 crc kubenswrapper[4805]: I0216 21:02:38.508550 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c88b4c759-5kqtq" podStartSLOduration=2.508462717 podStartE2EDuration="2.508462717s" podCreationTimestamp="2026-02-16 21:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:02:38.498238993 +0000 UTC m=+376.316922328" watchObservedRunningTime="2026-02-16 21:02:38.508462717 +0000 UTC m=+376.327146052" Feb 16 21:02:42 crc kubenswrapper[4805]: I0216 21:02:42.889996 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" podUID="ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" containerName="registry" containerID="cri-o://7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5" gracePeriod=30 Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.335149 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.516224 4805 generic.go:334] "Generic (PLEG): container finished" podID="ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" containerID="7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5" exitCode=0 Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.516271 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" event={"ID":"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729","Type":"ContainerDied","Data":"7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5"} Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.516311 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" event={"ID":"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729","Type":"ContainerDied","Data":"fec177d927ec213c95dbc03510073b0bf6bae59f00b4b3dce2a9047cbef53b9c"} Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.516306 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w44f5" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.516410 4805 scope.go:117] "RemoveContainer" containerID="7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.528986 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-installation-pull-secrets\") pod \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.529049 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-bound-sa-token\") pod \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.529112 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-tls\") pod \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.529154 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-certificates\") pod \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.529185 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns5n4\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-kube-api-access-ns5n4\") pod 
\"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.529246 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-ca-trust-extracted\") pod \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.529384 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.529484 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-trusted-ca\") pod \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\" (UID: \"ea3b5e66-34bb-401f-bfa1-98bfb6b4b729\") " Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.537102 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.537177 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.548655 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-kube-api-access-ns5n4" (OuterVolumeSpecName: "kube-api-access-ns5n4") pod "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729"). InnerVolumeSpecName "kube-api-access-ns5n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.553392 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.560529 4805 scope.go:117] "RemoveContainer" containerID="7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5" Feb 16 21:02:43 crc kubenswrapper[4805]: E0216 21:02:43.561604 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5\": container with ID starting with 7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5 not found: ID does not exist" containerID="7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.561685 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5"} err="failed to get container status \"7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5\": rpc error: code = NotFound 
desc = could not find container \"7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5\": container with ID starting with 7547d2def715595ec6641ef81f430c5c6a9bb2aef407af3d260bd2e8131ec1d5 not found: ID does not exist" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.564572 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.565041 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.567806 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.572137 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" (UID: "ea3b5e66-34bb-401f-bfa1-98bfb6b4b729"). 
InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.636622 4805 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.636694 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.636716 4805 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.636781 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.636804 4805 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.636830 4805 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.636857 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns5n4\" (UniqueName: \"kubernetes.io/projected/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729-kube-api-access-ns5n4\") on node \"crc\" 
DevicePath \"\"" Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.847224 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w44f5"] Feb 16 21:02:43 crc kubenswrapper[4805]: I0216 21:02:43.854233 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w44f5"] Feb 16 21:02:45 crc kubenswrapper[4805]: I0216 21:02:45.612269 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" path="/var/lib/kubelet/pods/ea3b5e66-34bb-401f-bfa1-98bfb6b4b729/volumes" Feb 16 21:02:46 crc kubenswrapper[4805]: I0216 21:02:46.957521 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:46 crc kubenswrapper[4805]: I0216 21:02:46.957599 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:46 crc kubenswrapper[4805]: I0216 21:02:46.964456 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:47 crc kubenswrapper[4805]: I0216 21:02:47.559365 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:02:47 crc kubenswrapper[4805]: I0216 21:02:47.656372 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-h2zb9"] Feb 16 21:02:51 crc kubenswrapper[4805]: I0216 21:02:51.684671 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:02:51 crc kubenswrapper[4805]: I0216 21:02:51.692167 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-55c6df6485-gfrcq" Feb 16 21:03:08 crc kubenswrapper[4805]: I0216 21:03:08.100429 4805 
patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:03:08 crc kubenswrapper[4805]: I0216 21:03:08.101206 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:03:08 crc kubenswrapper[4805]: I0216 21:03:08.101287 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:03:08 crc kubenswrapper[4805]: I0216 21:03:08.102644 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d5aa7da8c088ddcac44286336170e6647dae110a6c4f871ef29f7ab0795c9ec"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:03:08 crc kubenswrapper[4805]: I0216 21:03:08.102784 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://5d5aa7da8c088ddcac44286336170e6647dae110a6c4f871ef29f7ab0795c9ec" gracePeriod=600 Feb 16 21:03:08 crc kubenswrapper[4805]: I0216 21:03:08.725952 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="5d5aa7da8c088ddcac44286336170e6647dae110a6c4f871ef29f7ab0795c9ec" exitCode=0 Feb 16 21:03:08 crc 
kubenswrapper[4805]: I0216 21:03:08.726088 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"5d5aa7da8c088ddcac44286336170e6647dae110a6c4f871ef29f7ab0795c9ec"} Feb 16 21:03:08 crc kubenswrapper[4805]: I0216 21:03:08.726342 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"ceb0b80c1374cd4ccb9dd0d277e234416e87b41fce8125fe9f568455202c275d"} Feb 16 21:03:08 crc kubenswrapper[4805]: I0216 21:03:08.726369 4805 scope.go:117] "RemoveContainer" containerID="c5973f0774e3be54007771ad0abcf8e61a490f619b2e1c7e7c9a4b4587a84794" Feb 16 21:03:12 crc kubenswrapper[4805]: I0216 21:03:12.672980 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:03:12 crc kubenswrapper[4805]: I0216 21:03:12.713803 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:03:12 crc kubenswrapper[4805]: I0216 21:03:12.716528 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-h2zb9" podUID="2530eb64-2099-45e0-9727-ea9987f22ed5" containerName="console" containerID="cri-o://3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a" gracePeriod=15 Feb 16 21:03:12 crc kubenswrapper[4805]: I0216 21:03:12.794139 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.128081 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-h2zb9_2530eb64-2099-45e0-9727-ea9987f22ed5/console/0.log" Feb 16 21:03:13 crc kubenswrapper[4805]: 
I0216 21:03:13.128396 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.215102 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-console-config\") pod \"2530eb64-2099-45e0-9727-ea9987f22ed5\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.215163 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-trusted-ca-bundle\") pod \"2530eb64-2099-45e0-9727-ea9987f22ed5\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.215193 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-oauth-serving-cert\") pod \"2530eb64-2099-45e0-9727-ea9987f22ed5\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.215220 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-serving-cert\") pod \"2530eb64-2099-45e0-9727-ea9987f22ed5\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.215254 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dvhz\" (UniqueName: \"kubernetes.io/projected/2530eb64-2099-45e0-9727-ea9987f22ed5-kube-api-access-7dvhz\") pod \"2530eb64-2099-45e0-9727-ea9987f22ed5\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " Feb 16 21:03:13 crc kubenswrapper[4805]: 
I0216 21:03:13.215290 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-oauth-config\") pod \"2530eb64-2099-45e0-9727-ea9987f22ed5\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.215323 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-service-ca\") pod \"2530eb64-2099-45e0-9727-ea9987f22ed5\" (UID: \"2530eb64-2099-45e0-9727-ea9987f22ed5\") " Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.216138 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-console-config" (OuterVolumeSpecName: "console-config") pod "2530eb64-2099-45e0-9727-ea9987f22ed5" (UID: "2530eb64-2099-45e0-9727-ea9987f22ed5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.216178 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2530eb64-2099-45e0-9727-ea9987f22ed5" (UID: "2530eb64-2099-45e0-9727-ea9987f22ed5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.216204 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-service-ca" (OuterVolumeSpecName: "service-ca") pod "2530eb64-2099-45e0-9727-ea9987f22ed5" (UID: "2530eb64-2099-45e0-9727-ea9987f22ed5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.216151 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2530eb64-2099-45e0-9727-ea9987f22ed5" (UID: "2530eb64-2099-45e0-9727-ea9987f22ed5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.223917 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2530eb64-2099-45e0-9727-ea9987f22ed5" (UID: "2530eb64-2099-45e0-9727-ea9987f22ed5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.224258 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2530eb64-2099-45e0-9727-ea9987f22ed5" (UID: "2530eb64-2099-45e0-9727-ea9987f22ed5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.224770 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2530eb64-2099-45e0-9727-ea9987f22ed5-kube-api-access-7dvhz" (OuterVolumeSpecName: "kube-api-access-7dvhz") pod "2530eb64-2099-45e0-9727-ea9987f22ed5" (UID: "2530eb64-2099-45e0-9727-ea9987f22ed5"). InnerVolumeSpecName "kube-api-access-7dvhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.316523 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.316717 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.316790 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.316868 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.316948 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dvhz\" (UniqueName: \"kubernetes.io/projected/2530eb64-2099-45e0-9727-ea9987f22ed5-kube-api-access-7dvhz\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.317012 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2530eb64-2099-45e0-9727-ea9987f22ed5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.317069 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2530eb64-2099-45e0-9727-ea9987f22ed5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:03:13 crc 
kubenswrapper[4805]: I0216 21:03:13.770844 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-h2zb9_2530eb64-2099-45e0-9727-ea9987f22ed5/console/0.log" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.770918 4805 generic.go:334] "Generic (PLEG): container finished" podID="2530eb64-2099-45e0-9727-ea9987f22ed5" containerID="3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a" exitCode=2 Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.772268 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h2zb9" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.772236 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h2zb9" event={"ID":"2530eb64-2099-45e0-9727-ea9987f22ed5","Type":"ContainerDied","Data":"3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a"} Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.772702 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h2zb9" event={"ID":"2530eb64-2099-45e0-9727-ea9987f22ed5","Type":"ContainerDied","Data":"8b0e680a639c58bcd5a1da563e304dc420950f6a802277531e8aad68e3eaa86a"} Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.772776 4805 scope.go:117] "RemoveContainer" containerID="3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.806190 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-h2zb9"] Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.813525 4805 scope.go:117] "RemoveContainer" containerID="3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a" Feb 16 21:03:13 crc kubenswrapper[4805]: E0216 21:03:13.814380 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a\": container with ID starting with 3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a not found: ID does not exist" containerID="3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.814579 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a"} err="failed to get container status \"3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a\": rpc error: code = NotFound desc = could not find container \"3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a\": container with ID starting with 3c32f9cd6904156cc95cf329dbb085103861ea4d003e18299db140591af5ef2a not found: ID does not exist" Feb 16 21:03:13 crc kubenswrapper[4805]: I0216 21:03:13.815481 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-h2zb9"] Feb 16 21:03:15 crc kubenswrapper[4805]: I0216 21:03:15.612182 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2530eb64-2099-45e0-9727-ea9987f22ed5" path="/var/lib/kubelet/pods/2530eb64-2099-45e0-9727-ea9987f22ed5/volumes" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.584446 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79f7c68f86-9bv6w"] Feb 16 21:03:55 crc kubenswrapper[4805]: E0216 21:03:55.585246 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" containerName="registry" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.585260 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" containerName="registry" Feb 16 21:03:55 crc kubenswrapper[4805]: E0216 21:03:55.585283 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2530eb64-2099-45e0-9727-ea9987f22ed5" containerName="console" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.585289 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2530eb64-2099-45e0-9727-ea9987f22ed5" containerName="console" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.585413 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2530eb64-2099-45e0-9727-ea9987f22ed5" containerName="console" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.585434 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3b5e66-34bb-401f-bfa1-98bfb6b4b729" containerName="registry" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.585970 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.612547 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79f7c68f86-9bv6w"] Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.642741 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-service-ca\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.642867 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-console-config\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.642910 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-serving-cert\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.643018 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-oauth-config\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.643079 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk9zt\" (UniqueName: \"kubernetes.io/projected/f054a68c-ddb3-440e-87c2-dc1444078331-kube-api-access-dk9zt\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.643111 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-oauth-serving-cert\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.643181 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-trusted-ca-bundle\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.744945 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-service-ca\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.745069 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-console-config\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.745124 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-serving-cert\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.745191 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-oauth-config\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.745232 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk9zt\" (UniqueName: \"kubernetes.io/projected/f054a68c-ddb3-440e-87c2-dc1444078331-kube-api-access-dk9zt\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.745271 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-oauth-serving-cert\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.745354 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-trusted-ca-bundle\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.747221 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-oauth-serving-cert\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.747340 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-console-config\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.747591 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-trusted-ca-bundle\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.747826 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-service-ca\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.751578 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-oauth-config\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.751629 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-serving-cert\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.768373 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk9zt\" (UniqueName: \"kubernetes.io/projected/f054a68c-ddb3-440e-87c2-dc1444078331-kube-api-access-dk9zt\") pod \"console-79f7c68f86-9bv6w\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:55 crc kubenswrapper[4805]: I0216 21:03:55.912417 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:03:56 crc kubenswrapper[4805]: I0216 21:03:56.203126 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79f7c68f86-9bv6w"] Feb 16 21:03:57 crc kubenswrapper[4805]: I0216 21:03:57.113615 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79f7c68f86-9bv6w" event={"ID":"f054a68c-ddb3-440e-87c2-dc1444078331","Type":"ContainerStarted","Data":"6db40e61404d6a64b3964d7809ccc9ccf4d25aa188ffae9a32a88853ce0b9f1d"} Feb 16 21:03:57 crc kubenswrapper[4805]: I0216 21:03:57.113993 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79f7c68f86-9bv6w" event={"ID":"f054a68c-ddb3-440e-87c2-dc1444078331","Type":"ContainerStarted","Data":"e26b5445db7af0bda1f51a15f6a1ee03128b1fb7a49cd5605179b8c9ae9d8105"} Feb 16 21:03:57 crc kubenswrapper[4805]: I0216 21:03:57.147968 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-79f7c68f86-9bv6w" podStartSLOduration=2.147939092 podStartE2EDuration="2.147939092s" podCreationTimestamp="2026-02-16 21:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:03:57.142874597 +0000 UTC m=+454.961557972" watchObservedRunningTime="2026-02-16 21:03:57.147939092 +0000 UTC m=+454.966622427" Feb 16 21:04:05 crc kubenswrapper[4805]: I0216 21:04:05.913516 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:04:05 crc kubenswrapper[4805]: I0216 21:04:05.914262 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:04:05 crc kubenswrapper[4805]: I0216 21:04:05.920882 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:04:06 crc kubenswrapper[4805]: I0216 21:04:06.198655 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:04:06 crc kubenswrapper[4805]: I0216 21:04:06.281662 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5c88b4c759-5kqtq"] Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.330559 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5c88b4c759-5kqtq" podUID="5016a697-f4a8-4d27-a16a-3e91a257fc94" containerName="console" containerID="cri-o://9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6" gracePeriod=15 Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.873551 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c88b4c759-5kqtq_5016a697-f4a8-4d27-a16a-3e91a257fc94/console/0.log" Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.873822 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.988852 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-oauth-serving-cert\") pod \"5016a697-f4a8-4d27-a16a-3e91a257fc94\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.988970 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-trusted-ca-bundle\") pod \"5016a697-f4a8-4d27-a16a-3e91a257fc94\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.989007 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-service-ca\") pod \"5016a697-f4a8-4d27-a16a-3e91a257fc94\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.989072 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-serving-cert\") pod \"5016a697-f4a8-4d27-a16a-3e91a257fc94\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.989129 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-oauth-config\") pod \"5016a697-f4a8-4d27-a16a-3e91a257fc94\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.989178 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-2cvdh\" (UniqueName: \"kubernetes.io/projected/5016a697-f4a8-4d27-a16a-3e91a257fc94-kube-api-access-2cvdh\") pod \"5016a697-f4a8-4d27-a16a-3e91a257fc94\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.989261 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-config\") pod \"5016a697-f4a8-4d27-a16a-3e91a257fc94\" (UID: \"5016a697-f4a8-4d27-a16a-3e91a257fc94\") " Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.990314 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-service-ca" (OuterVolumeSpecName: "service-ca") pod "5016a697-f4a8-4d27-a16a-3e91a257fc94" (UID: "5016a697-f4a8-4d27-a16a-3e91a257fc94"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.990350 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-config" (OuterVolumeSpecName: "console-config") pod "5016a697-f4a8-4d27-a16a-3e91a257fc94" (UID: "5016a697-f4a8-4d27-a16a-3e91a257fc94"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.990380 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5016a697-f4a8-4d27-a16a-3e91a257fc94" (UID: "5016a697-f4a8-4d27-a16a-3e91a257fc94"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.990706 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5016a697-f4a8-4d27-a16a-3e91a257fc94" (UID: "5016a697-f4a8-4d27-a16a-3e91a257fc94"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.994956 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5016a697-f4a8-4d27-a16a-3e91a257fc94" (UID: "5016a697-f4a8-4d27-a16a-3e91a257fc94"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.994999 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5016a697-f4a8-4d27-a16a-3e91a257fc94" (UID: "5016a697-f4a8-4d27-a16a-3e91a257fc94"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:04:31 crc kubenswrapper[4805]: I0216 21:04:31.996078 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5016a697-f4a8-4d27-a16a-3e91a257fc94-kube-api-access-2cvdh" (OuterVolumeSpecName: "kube-api-access-2cvdh") pod "5016a697-f4a8-4d27-a16a-3e91a257fc94" (UID: "5016a697-f4a8-4d27-a16a-3e91a257fc94"). InnerVolumeSpecName "kube-api-access-2cvdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.090631 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.090677 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.090695 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.090738 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5016a697-f4a8-4d27-a16a-3e91a257fc94-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.090757 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.090778 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5016a697-f4a8-4d27-a16a-3e91a257fc94-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.090795 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cvdh\" (UniqueName: \"kubernetes.io/projected/5016a697-f4a8-4d27-a16a-3e91a257fc94-kube-api-access-2cvdh\") on node \"crc\" DevicePath \"\"" Feb 16 21:04:32 crc 
kubenswrapper[4805]: I0216 21:04:32.395550 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c88b4c759-5kqtq_5016a697-f4a8-4d27-a16a-3e91a257fc94/console/0.log" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.395646 4805 generic.go:334] "Generic (PLEG): container finished" podID="5016a697-f4a8-4d27-a16a-3e91a257fc94" containerID="9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6" exitCode=2 Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.395694 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c88b4c759-5kqtq" event={"ID":"5016a697-f4a8-4d27-a16a-3e91a257fc94","Type":"ContainerDied","Data":"9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6"} Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.395781 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c88b4c759-5kqtq" event={"ID":"5016a697-f4a8-4d27-a16a-3e91a257fc94","Type":"ContainerDied","Data":"d43d11a8f00fc5505fe187d8bc16ab4a3582a4597f257349d30d761c543a0e8d"} Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.395791 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c88b4c759-5kqtq" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.395811 4805 scope.go:117] "RemoveContainer" containerID="9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.426381 4805 scope.go:117] "RemoveContainer" containerID="9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6" Feb 16 21:04:32 crc kubenswrapper[4805]: E0216 21:04:32.426938 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6\": container with ID starting with 9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6 not found: ID does not exist" containerID="9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.427011 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6"} err="failed to get container status \"9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6\": rpc error: code = NotFound desc = could not find container \"9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6\": container with ID starting with 9fc41676a8d07c92859623204813b066c00775cf844b19a3cf6c0b0c8f689ba6 not found: ID does not exist" Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.444853 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5c88b4c759-5kqtq"] Feb 16 21:04:32 crc kubenswrapper[4805]: I0216 21:04:32.449464 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5c88b4c759-5kqtq"] Feb 16 21:04:33 crc kubenswrapper[4805]: I0216 21:04:33.609096 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5016a697-f4a8-4d27-a16a-3e91a257fc94" path="/var/lib/kubelet/pods/5016a697-f4a8-4d27-a16a-3e91a257fc94/volumes" Feb 16 21:05:05 crc kubenswrapper[4805]: I0216 21:05:05.838188 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8"] Feb 16 21:05:05 crc kubenswrapper[4805]: E0216 21:05:05.839102 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5016a697-f4a8-4d27-a16a-3e91a257fc94" containerName="console" Feb 16 21:05:05 crc kubenswrapper[4805]: I0216 21:05:05.839123 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5016a697-f4a8-4d27-a16a-3e91a257fc94" containerName="console" Feb 16 21:05:05 crc kubenswrapper[4805]: I0216 21:05:05.839334 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5016a697-f4a8-4d27-a16a-3e91a257fc94" containerName="console" Feb 16 21:05:05 crc kubenswrapper[4805]: I0216 21:05:05.840685 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:05 crc kubenswrapper[4805]: I0216 21:05:05.843460 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:05:05 crc kubenswrapper[4805]: I0216 21:05:05.860280 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8"] Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.026229 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8\" (UID: \"3a65bf60-dac3-485c-83ed-cd7900050692\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.026298 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8\" (UID: \"3a65bf60-dac3-485c-83ed-cd7900050692\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.026381 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fw4m\" (UniqueName: \"kubernetes.io/projected/3a65bf60-dac3-485c-83ed-cd7900050692-kube-api-access-4fw4m\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8\" (UID: \"3a65bf60-dac3-485c-83ed-cd7900050692\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:06 crc kubenswrapper[4805]: 
I0216 21:05:06.127417 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8\" (UID: \"3a65bf60-dac3-485c-83ed-cd7900050692\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.127496 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fw4m\" (UniqueName: \"kubernetes.io/projected/3a65bf60-dac3-485c-83ed-cd7900050692-kube-api-access-4fw4m\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8\" (UID: \"3a65bf60-dac3-485c-83ed-cd7900050692\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.127578 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8\" (UID: \"3a65bf60-dac3-485c-83ed-cd7900050692\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.128003 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8\" (UID: \"3a65bf60-dac3-485c-83ed-cd7900050692\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.128044 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8\" (UID: \"3a65bf60-dac3-485c-83ed-cd7900050692\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.161190 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fw4m\" (UniqueName: \"kubernetes.io/projected/3a65bf60-dac3-485c-83ed-cd7900050692-kube-api-access-4fw4m\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8\" (UID: \"3a65bf60-dac3-485c-83ed-cd7900050692\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.170206 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.415397 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8"] Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.666065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" event={"ID":"3a65bf60-dac3-485c-83ed-cd7900050692","Type":"ContainerStarted","Data":"acc5e6584895ec271adcc6b33fe7e2fad3df9c4db6df3ba59619b2e2ac29fb25"} Feb 16 21:05:06 crc kubenswrapper[4805]: I0216 21:05:06.666117 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" event={"ID":"3a65bf60-dac3-485c-83ed-cd7900050692","Type":"ContainerStarted","Data":"65c66d01ec50b7ad9511f7de65f66ec0e5ab97759adea4739a96325ed542fa9d"} Feb 16 21:05:07 crc kubenswrapper[4805]: I0216 21:05:07.675058 4805 
generic.go:334] "Generic (PLEG): container finished" podID="3a65bf60-dac3-485c-83ed-cd7900050692" containerID="acc5e6584895ec271adcc6b33fe7e2fad3df9c4db6df3ba59619b2e2ac29fb25" exitCode=0 Feb 16 21:05:07 crc kubenswrapper[4805]: I0216 21:05:07.675342 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" event={"ID":"3a65bf60-dac3-485c-83ed-cd7900050692","Type":"ContainerDied","Data":"acc5e6584895ec271adcc6b33fe7e2fad3df9c4db6df3ba59619b2e2ac29fb25"} Feb 16 21:05:07 crc kubenswrapper[4805]: I0216 21:05:07.678639 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:05:08 crc kubenswrapper[4805]: I0216 21:05:08.100243 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:05:08 crc kubenswrapper[4805]: I0216 21:05:08.100363 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:05:09 crc kubenswrapper[4805]: I0216 21:05:09.691973 4805 generic.go:334] "Generic (PLEG): container finished" podID="3a65bf60-dac3-485c-83ed-cd7900050692" containerID="2b505210ca73afa57e3450bcd558a19c3fca6b855160cc4a9b8e76a2011ca0b9" exitCode=0 Feb 16 21:05:09 crc kubenswrapper[4805]: I0216 21:05:09.692373 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" 
event={"ID":"3a65bf60-dac3-485c-83ed-cd7900050692","Type":"ContainerDied","Data":"2b505210ca73afa57e3450bcd558a19c3fca6b855160cc4a9b8e76a2011ca0b9"} Feb 16 21:05:10 crc kubenswrapper[4805]: I0216 21:05:10.704689 4805 generic.go:334] "Generic (PLEG): container finished" podID="3a65bf60-dac3-485c-83ed-cd7900050692" containerID="25ce8a1dc318442c4b9ccef3880c5b466115343c7d4a709b9dc7ecb087bed0cc" exitCode=0 Feb 16 21:05:10 crc kubenswrapper[4805]: I0216 21:05:10.704785 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" event={"ID":"3a65bf60-dac3-485c-83ed-cd7900050692","Type":"ContainerDied","Data":"25ce8a1dc318442c4b9ccef3880c5b466115343c7d4a709b9dc7ecb087bed0cc"} Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.077166 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.144006 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-bundle\") pod \"3a65bf60-dac3-485c-83ed-cd7900050692\" (UID: \"3a65bf60-dac3-485c-83ed-cd7900050692\") " Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.144108 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-util\") pod \"3a65bf60-dac3-485c-83ed-cd7900050692\" (UID: \"3a65bf60-dac3-485c-83ed-cd7900050692\") " Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.144179 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fw4m\" (UniqueName: \"kubernetes.io/projected/3a65bf60-dac3-485c-83ed-cd7900050692-kube-api-access-4fw4m\") pod \"3a65bf60-dac3-485c-83ed-cd7900050692\" (UID: 
\"3a65bf60-dac3-485c-83ed-cd7900050692\") " Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.147023 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-bundle" (OuterVolumeSpecName: "bundle") pod "3a65bf60-dac3-485c-83ed-cd7900050692" (UID: "3a65bf60-dac3-485c-83ed-cd7900050692"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.151304 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a65bf60-dac3-485c-83ed-cd7900050692-kube-api-access-4fw4m" (OuterVolumeSpecName: "kube-api-access-4fw4m") pod "3a65bf60-dac3-485c-83ed-cd7900050692" (UID: "3a65bf60-dac3-485c-83ed-cd7900050692"). InnerVolumeSpecName "kube-api-access-4fw4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.160203 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-util" (OuterVolumeSpecName: "util") pod "3a65bf60-dac3-485c-83ed-cd7900050692" (UID: "3a65bf60-dac3-485c-83ed-cd7900050692"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.246313 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.246362 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a65bf60-dac3-485c-83ed-cd7900050692-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.246383 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fw4m\" (UniqueName: \"kubernetes.io/projected/3a65bf60-dac3-485c-83ed-cd7900050692-kube-api-access-4fw4m\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.724225 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" event={"ID":"3a65bf60-dac3-485c-83ed-cd7900050692","Type":"ContainerDied","Data":"65c66d01ec50b7ad9511f7de65f66ec0e5ab97759adea4739a96325ed542fa9d"} Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.724591 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65c66d01ec50b7ad9511f7de65f66ec0e5ab97759adea4739a96325ed542fa9d" Feb 16 21:05:12 crc kubenswrapper[4805]: I0216 21:05:12.724357 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8" Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.128834 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-crk96"] Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.132281 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovn-controller" containerID="cri-o://8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5" gracePeriod=30 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.132454 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="nbdb" containerID="cri-o://815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325" gracePeriod=30 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.132508 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="northd" containerID="cri-o://a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba" gracePeriod=30 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.132376 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="sbdb" containerID="cri-o://1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40" gracePeriod=30 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.132465 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovn-acl-logging" 
containerID="cri-o://cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42" gracePeriod=30 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.132457 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="kube-rbac-proxy-node" containerID="cri-o://cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68" gracePeriod=30 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.132420 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c" gracePeriod=30 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.183386 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" containerID="cri-o://b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1" gracePeriod=30 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.779351 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qwfz_7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2/kube-multus/2.log" Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.780705 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qwfz_7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2/kube-multus/1.log" Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.780831 4805 generic.go:334] "Generic (PLEG): container finished" podID="7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2" containerID="525eb5bad3094f13416cbe9634fedc7514417458399df4d37ede4cc0a0909ad2" exitCode=2 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.780979 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qwfz" event={"ID":"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2","Type":"ContainerDied","Data":"525eb5bad3094f13416cbe9634fedc7514417458399df4d37ede4cc0a0909ad2"} Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.781063 4805 scope.go:117] "RemoveContainer" containerID="ea8c8b685bbca66fac721a2c3c80ff4c17b5859b48343d042355852f73b8fc36" Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.782536 4805 scope.go:117] "RemoveContainer" containerID="525eb5bad3094f13416cbe9634fedc7514417458399df4d37ede4cc0a0909ad2" Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.795583 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovnkube-controller/3.log" Feb 16 21:05:17 crc kubenswrapper[4805]: E0216 21:05:17.798135 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-8qwfz_openshift-multus(7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2)\"" pod="openshift-multus/multus-8qwfz" podUID="7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2" Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.806255 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovn-acl-logging/0.log" Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.807575 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovn-controller/0.log" Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.810454 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1" exitCode=0 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.810517 
4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40" exitCode=0 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.810596 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1"} Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.810654 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40"} Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.810683 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325"} Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.810819 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325" exitCode=0 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.811819 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba"} Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.811848 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba" exitCode=0 Feb 16 21:05:17 crc 
kubenswrapper[4805]: I0216 21:05:17.811873 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42" exitCode=143 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.811883 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5" exitCode=143 Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.811910 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42"} Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.811925 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5"} Feb 16 21:05:17 crc kubenswrapper[4805]: I0216 21:05:17.845416 4805 scope.go:117] "RemoveContainer" containerID="344a737a9be24ce302e5cc1aa62e150fe42050649b343d58c9410fc3653da229" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.332815 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovn-acl-logging/0.log" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.333209 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovn-controller/0.log" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.333557 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349045 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-var-lib-openvswitch\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349086 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-log-socket\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349112 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349140 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8719b45e-eed5-4265-87de-46967022148f-ovn-node-metrics-cert\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349165 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-ovn\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349186 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-netd\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349180 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349210 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-openvswitch\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349232 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-config\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349243 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349250 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-env-overrides\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349264 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349301 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-slash\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349326 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-script-lib\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349343 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-node-log\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc 
kubenswrapper[4805]: I0216 21:05:18.349344 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-slash" (OuterVolumeSpecName: "host-slash") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349374 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349376 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6stvx\" (UniqueName: \"kubernetes.io/projected/8719b45e-eed5-4265-87de-46967022148f-kube-api-access-6stvx\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349427 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-ovn-kubernetes\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349477 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-systemd-units\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 
21:05:18.349498 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-netns\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349528 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-kubelet\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349498 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349540 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349566 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-bin\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349585 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349605 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-etc-openvswitch\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349608 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349629 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349634 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-systemd\") pod \"8719b45e-eed5-4265-87de-46967022148f\" (UID: \"8719b45e-eed5-4265-87de-46967022148f\") " Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349648 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349666 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349773 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349797 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-node-log" (OuterVolumeSpecName: "node-log") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349796 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-log-socket" (OuterVolumeSpecName: "log-socket") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349812 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349965 4805 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349979 4805 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349988 4805 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.349998 4805 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350006 4805 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350015 4805 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350022 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc 
kubenswrapper[4805]: I0216 21:05:18.350029 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350037 4805 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350036 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350045 4805 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350070 4805 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350089 4805 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350098 4805 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350109 4805 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350118 4805 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.350127 4805 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.353872 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8719b45e-eed5-4265-87de-46967022148f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.366633 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8719b45e-eed5-4265-87de-46967022148f-kube-api-access-6stvx" (OuterVolumeSpecName: "kube-api-access-6stvx") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "kube-api-access-6stvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.380228 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8719b45e-eed5-4265-87de-46967022148f" (UID: "8719b45e-eed5-4265-87de-46967022148f"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.408714 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mgwvp"] Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.408962 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="sbdb" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.408981 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="sbdb" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.408991 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.408999 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409006 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="northd" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409012 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="northd" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409020 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" 
containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409026 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409035 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="kubecfg-setup" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409041 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="kubecfg-setup" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409051 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409057 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409063 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409070 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409078 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a65bf60-dac3-485c-83ed-cd7900050692" containerName="pull" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409084 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a65bf60-dac3-485c-83ed-cd7900050692" containerName="pull" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409095 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" 
containerName="ovn-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409100 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovn-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409108 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a65bf60-dac3-485c-83ed-cd7900050692" containerName="extract" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409114 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a65bf60-dac3-485c-83ed-cd7900050692" containerName="extract" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409124 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="kube-rbac-proxy-node" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409130 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="kube-rbac-proxy-node" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409139 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="nbdb" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409144 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="nbdb" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409149 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409155 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409166 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a65bf60-dac3-485c-83ed-cd7900050692" containerName="util" 
Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409172 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a65bf60-dac3-485c-83ed-cd7900050692" containerName="util" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409180 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovn-acl-logging" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409187 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovn-acl-logging" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409300 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a65bf60-dac3-485c-83ed-cd7900050692" containerName="extract" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409310 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="kube-rbac-proxy-node" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409320 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409326 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409333 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovn-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409338 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="sbdb" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409345 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 
21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409352 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="northd" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409360 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovn-acl-logging" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409366 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409374 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409380 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409391 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="nbdb" Feb 16 21:05:18 crc kubenswrapper[4805]: E0216 21:05:18.409515 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.409526 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719b45e-eed5-4265-87de-46967022148f" containerName="ovnkube-controller" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.411887 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.450963 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-ovnkube-script-lib\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451005 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-slash\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451021 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-node-log\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451037 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-env-overrides\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451053 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-run-systemd\") pod \"ovnkube-node-mgwvp\" (UID: 
\"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451071 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-run-netns\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451085 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-cni-bin\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451100 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt66t\" (UniqueName: \"kubernetes.io/projected/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-kube-api-access-vt66t\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-ovnkube-config\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451129 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-var-lib-openvswitch\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451144 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-run-ovn\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451164 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-systemd-units\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451183 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451199 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-ovn-node-metrics-cert\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451228 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-run-openvswitch\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451245 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-etc-openvswitch\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451274 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451293 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-cni-netd\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451314 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-kubelet\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451330 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-log-socket\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451360 4805 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8719b45e-eed5-4265-87de-46967022148f-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451370 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8719b45e-eed5-4265-87de-46967022148f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451379 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8719b45e-eed5-4265-87de-46967022148f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.451387 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6stvx\" (UniqueName: \"kubernetes.io/projected/8719b45e-eed5-4265-87de-46967022148f-kube-api-access-6stvx\") on node \"crc\" DevicePath \"\"" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552133 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552184 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-cni-netd\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552208 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-kubelet\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552225 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-log-socket\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552249 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-ovnkube-script-lib\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552269 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-slash\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552283 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-node-log\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552301 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-env-overrides\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552316 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-run-systemd\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552343 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-cni-bin\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552363 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-run-netns\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552383 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt66t\" (UniqueName: \"kubernetes.io/projected/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-kube-api-access-vt66t\") 
pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552401 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-ovnkube-config\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552417 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-var-lib-openvswitch\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-run-ovn\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552453 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-systemd-units\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552495 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mgwvp\" (UID: 
\"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552511 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-ovn-node-metrics-cert\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552534 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-run-openvswitch\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552552 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-etc-openvswitch\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552612 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-etc-openvswitch\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552645 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mgwvp\" (UID: 
\"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552669 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-cni-netd\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552688 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-kubelet\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.552707 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-log-socket\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.553344 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-ovnkube-script-lib\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.553392 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-slash\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc 
kubenswrapper[4805]: I0216 21:05:18.553413 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-node-log\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.553697 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-env-overrides\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.553752 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-run-systemd\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.553773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-cni-bin\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.553791 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-run-netns\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.554431 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-ovnkube-config\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.554472 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-var-lib-openvswitch\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.554495 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-run-ovn\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.554514 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-systemd-units\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.554534 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.555107 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-run-openvswitch\") pod 
\"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.558533 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-ovn-node-metrics-cert\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.572773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt66t\" (UniqueName: \"kubernetes.io/projected/3b847ea7-fefb-4e18-9cd5-3218724a7f3b-kube-api-access-vt66t\") pod \"ovnkube-node-mgwvp\" (UID: \"3b847ea7-fefb-4e18-9cd5-3218724a7f3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.725899 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.818675 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qwfz_7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2/kube-multus/2.log" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.821982 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovn-acl-logging/0.log" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.822454 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-crk96_8719b45e-eed5-4265-87de-46967022148f/ovn-controller/0.log" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.822789 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c" exitCode=0 Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.822816 4805 generic.go:334] "Generic (PLEG): container finished" podID="8719b45e-eed5-4265-87de-46967022148f" containerID="cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68" exitCode=0 Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.822864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c"} Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.822893 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68"} Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.822908 4805 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" event={"ID":"8719b45e-eed5-4265-87de-46967022148f","Type":"ContainerDied","Data":"dacdb696bc440d7cb506367c083fc5b450da326cd9c506b87c1c65ba60abb8ad"} Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.822934 4805 scope.go:117] "RemoveContainer" containerID="b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.823086 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-crk96" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.827943 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" event={"ID":"3b847ea7-fefb-4e18-9cd5-3218724a7f3b","Type":"ContainerStarted","Data":"b0c30bf19b9b8b22bafa094604fd2765481f6056abfb41dbe3996c86f599cda7"} Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.844135 4805 scope.go:117] "RemoveContainer" containerID="1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.858002 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-crk96"] Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.860994 4805 scope.go:117] "RemoveContainer" containerID="815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.861150 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-crk96"] Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.880850 4805 scope.go:117] "RemoveContainer" containerID="a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.911236 4805 scope.go:117] "RemoveContainer" containerID="5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c" Feb 16 21:05:18 crc 
kubenswrapper[4805]: I0216 21:05:18.957433 4805 scope.go:117] "RemoveContainer" containerID="cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.974707 4805 scope.go:117] "RemoveContainer" containerID="cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42" Feb 16 21:05:18 crc kubenswrapper[4805]: I0216 21:05:18.989559 4805 scope.go:117] "RemoveContainer" containerID="8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.006615 4805 scope.go:117] "RemoveContainer" containerID="d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.020654 4805 scope.go:117] "RemoveContainer" containerID="b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1" Feb 16 21:05:19 crc kubenswrapper[4805]: E0216 21:05:19.021156 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1\": container with ID starting with b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1 not found: ID does not exist" containerID="b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.021220 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1"} err="failed to get container status \"b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1\": rpc error: code = NotFound desc = could not find container \"b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1\": container with ID starting with b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.021276 
4805 scope.go:117] "RemoveContainer" containerID="1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40" Feb 16 21:05:19 crc kubenswrapper[4805]: E0216 21:05:19.021918 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\": container with ID starting with 1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40 not found: ID does not exist" containerID="1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.021959 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40"} err="failed to get container status \"1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\": rpc error: code = NotFound desc = could not find container \"1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\": container with ID starting with 1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.021989 4805 scope.go:117] "RemoveContainer" containerID="815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325" Feb 16 21:05:19 crc kubenswrapper[4805]: E0216 21:05:19.022348 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\": container with ID starting with 815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325 not found: ID does not exist" containerID="815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.022383 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325"} err="failed to get container status \"815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\": rpc error: code = NotFound desc = could not find container \"815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\": container with ID starting with 815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.022407 4805 scope.go:117] "RemoveContainer" containerID="a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba" Feb 16 21:05:19 crc kubenswrapper[4805]: E0216 21:05:19.022778 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\": container with ID starting with a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba not found: ID does not exist" containerID="a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.022797 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba"} err="failed to get container status \"a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\": rpc error: code = NotFound desc = could not find container \"a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\": container with ID starting with a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.022810 4805 scope.go:117] "RemoveContainer" containerID="5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c" Feb 16 21:05:19 crc kubenswrapper[4805]: E0216 21:05:19.023037 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\": container with ID starting with 5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c not found: ID does not exist" containerID="5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.023059 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c"} err="failed to get container status \"5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\": rpc error: code = NotFound desc = could not find container \"5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\": container with ID starting with 5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.023074 4805 scope.go:117] "RemoveContainer" containerID="cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68" Feb 16 21:05:19 crc kubenswrapper[4805]: E0216 21:05:19.023406 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\": container with ID starting with cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68 not found: ID does not exist" containerID="cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.023432 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68"} err="failed to get container status \"cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\": rpc error: code = NotFound desc = could not find container 
\"cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\": container with ID starting with cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.023448 4805 scope.go:117] "RemoveContainer" containerID="cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42" Feb 16 21:05:19 crc kubenswrapper[4805]: E0216 21:05:19.023761 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\": container with ID starting with cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42 not found: ID does not exist" containerID="cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.023803 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42"} err="failed to get container status \"cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\": rpc error: code = NotFound desc = could not find container \"cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\": container with ID starting with cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.023820 4805 scope.go:117] "RemoveContainer" containerID="8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5" Feb 16 21:05:19 crc kubenswrapper[4805]: E0216 21:05:19.024109 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\": container with ID starting with 8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5 not found: ID does not exist" 
containerID="8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.024144 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5"} err="failed to get container status \"8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\": rpc error: code = NotFound desc = could not find container \"8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\": container with ID starting with 8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.024165 4805 scope.go:117] "RemoveContainer" containerID="d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957" Feb 16 21:05:19 crc kubenswrapper[4805]: E0216 21:05:19.024407 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\": container with ID starting with d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957 not found: ID does not exist" containerID="d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.024433 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957"} err="failed to get container status \"d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\": rpc error: code = NotFound desc = could not find container \"d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\": container with ID starting with d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.024449 4805 scope.go:117] 
"RemoveContainer" containerID="b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.024765 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1"} err="failed to get container status \"b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1\": rpc error: code = NotFound desc = could not find container \"b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1\": container with ID starting with b336cbb67a18b3137cc83f834c4dc3b5c39702e11a5f35b0c37234fe940b00d1 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.024790 4805 scope.go:117] "RemoveContainer" containerID="1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.025047 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40"} err="failed to get container status \"1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\": rpc error: code = NotFound desc = could not find container \"1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40\": container with ID starting with 1feb97aff615bedb90b8207d3350f9dca93c46a40e404fc4236e073aa35e1a40 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.025068 4805 scope.go:117] "RemoveContainer" containerID="815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.025306 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325"} err="failed to get container status \"815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\": rpc error: code = 
NotFound desc = could not find container \"815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325\": container with ID starting with 815f94a8a4b24d9297c31022494aa8eaa3ce4672fe5be22e1f58339ba84d7325 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.025351 4805 scope.go:117] "RemoveContainer" containerID="a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.025775 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba"} err="failed to get container status \"a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\": rpc error: code = NotFound desc = could not find container \"a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba\": container with ID starting with a829ea39827dc85bd13a472b668040b0c07f8583257e3f44d93dda9f872406ba not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.025799 4805 scope.go:117] "RemoveContainer" containerID="5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.026023 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c"} err="failed to get container status \"5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\": rpc error: code = NotFound desc = could not find container \"5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c\": container with ID starting with 5fa94569d55c7eaf9f66f229b4f44421939bad2af9bc197ac410ca754a3ea45c not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.026040 4805 scope.go:117] "RemoveContainer" containerID="cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68" Feb 16 21:05:19 crc 
kubenswrapper[4805]: I0216 21:05:19.026341 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68"} err="failed to get container status \"cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\": rpc error: code = NotFound desc = could not find container \"cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68\": container with ID starting with cd0bd95c0dbb57391a0d88183a169dbcb3da4190f455f2087dd0d5663b24ca68 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.026367 4805 scope.go:117] "RemoveContainer" containerID="cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.026597 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42"} err="failed to get container status \"cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\": rpc error: code = NotFound desc = could not find container \"cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42\": container with ID starting with cfdf88cf4a90f5fa6ce5d092a27b320496a8547c462a1e78902dc18703934a42 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.026622 4805 scope.go:117] "RemoveContainer" containerID="8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.026940 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5"} err="failed to get container status \"8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\": rpc error: code = NotFound desc = could not find container \"8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5\": container 
with ID starting with 8bf1d1894ebf9726e36c55d6ccafcd017dbc40fbdfbc1ca4999fec656d90b1e5 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.026961 4805 scope.go:117] "RemoveContainer" containerID="d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.027209 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957"} err="failed to get container status \"d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\": rpc error: code = NotFound desc = could not find container \"d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957\": container with ID starting with d86a1d474e781a0628cae4db77ce439d40797d33a411bb031e16990e6e7da957 not found: ID does not exist" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.604007 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8719b45e-eed5-4265-87de-46967022148f" path="/var/lib/kubelet/pods/8719b45e-eed5-4265-87de-46967022148f/volumes" Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.837295 4805 generic.go:334] "Generic (PLEG): container finished" podID="3b847ea7-fefb-4e18-9cd5-3218724a7f3b" containerID="45c9a500d8d8f8c1ffcd6972ab5bcf27aed9992c01179e28146b65b45f454fca" exitCode=0 Feb 16 21:05:19 crc kubenswrapper[4805]: I0216 21:05:19.837398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" event={"ID":"3b847ea7-fefb-4e18-9cd5-3218724a7f3b","Type":"ContainerDied","Data":"45c9a500d8d8f8c1ffcd6972ab5bcf27aed9992c01179e28146b65b45f454fca"} Feb 16 21:05:20 crc kubenswrapper[4805]: I0216 21:05:20.847307 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" 
event={"ID":"3b847ea7-fefb-4e18-9cd5-3218724a7f3b","Type":"ContainerStarted","Data":"5971ba453ceaee9ddf946afd538f517ec7bc974da74583698c119c5ef697a5d2"} Feb 16 21:05:20 crc kubenswrapper[4805]: I0216 21:05:20.847633 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" event={"ID":"3b847ea7-fefb-4e18-9cd5-3218724a7f3b","Type":"ContainerStarted","Data":"4e0206f600f76447be1cb71a438dc6f7201ab80b2ab93351f9d69fdccfa76d1e"} Feb 16 21:05:20 crc kubenswrapper[4805]: I0216 21:05:20.847643 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" event={"ID":"3b847ea7-fefb-4e18-9cd5-3218724a7f3b","Type":"ContainerStarted","Data":"a1cebf21a611f048ebc25c581ef589ed5fdc3913f77fa7c3ebcbfcd31be5ce94"} Feb 16 21:05:20 crc kubenswrapper[4805]: I0216 21:05:20.847652 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" event={"ID":"3b847ea7-fefb-4e18-9cd5-3218724a7f3b","Type":"ContainerStarted","Data":"84d3308a06c205eeda4e139a7839551ec5b7d1164b08713b8b942f62553af4fe"} Feb 16 21:05:20 crc kubenswrapper[4805]: I0216 21:05:20.847662 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" event={"ID":"3b847ea7-fefb-4e18-9cd5-3218724a7f3b","Type":"ContainerStarted","Data":"7701d178aa34e4ce3a3692ffcea33d0e163620eb4afbd7fb4ea298d80cb466de"} Feb 16 21:05:20 crc kubenswrapper[4805]: I0216 21:05:20.847670 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" event={"ID":"3b847ea7-fefb-4e18-9cd5-3218724a7f3b","Type":"ContainerStarted","Data":"a4e62a70a07841193c8f210d06db318e66288c25b2a28bdb2f79d3689da727e6"} Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.811816 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4"] Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.813256 
4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.816284 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.817490 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.818121 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-wwm28" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.824519 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfgj2\" (UniqueName: \"kubernetes.io/projected/79bf21e6-60c9-4788-a02f-8efb828dc8ef-kube-api-access-zfgj2\") pod \"obo-prometheus-operator-68bc856cb9-6g7x4\" (UID: \"79bf21e6-60c9-4788-a02f-8efb828dc8ef\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.866783 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" event={"ID":"3b847ea7-fefb-4e18-9cd5-3218724a7f3b","Type":"ContainerStarted","Data":"65e9244c4969392287fcd9a449ce2a7700eb17a492ae5437a911805a61d8775f"} Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.901538 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv"] Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.902477 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.904577 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-mg42c" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.906015 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.909936 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2"] Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.910873 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.925550 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/111775fe-ccc4-4b93-9fcf-5a9bd115788c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-846988ff46-2gttv\" (UID: \"111775fe-ccc4-4b93-9fcf-5a9bd115788c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.925705 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/111775fe-ccc4-4b93-9fcf-5a9bd115788c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-846988ff46-2gttv\" (UID: \"111775fe-ccc4-4b93-9fcf-5a9bd115788c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.925820 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zfgj2\" (UniqueName: \"kubernetes.io/projected/79bf21e6-60c9-4788-a02f-8efb828dc8ef-kube-api-access-zfgj2\") pod \"obo-prometheus-operator-68bc856cb9-6g7x4\" (UID: \"79bf21e6-60c9-4788-a02f-8efb828dc8ef\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.925928 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22284904-7391-4eb6-9ef7-adf068c3d7ec-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2\" (UID: \"22284904-7391-4eb6-9ef7-adf068c3d7ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.926016 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22284904-7391-4eb6-9ef7-adf068c3d7ec-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2\" (UID: \"22284904-7391-4eb6-9ef7-adf068c3d7ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:23 crc kubenswrapper[4805]: I0216 21:05:23.983359 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfgj2\" (UniqueName: \"kubernetes.io/projected/79bf21e6-60c9-4788-a02f-8efb828dc8ef-kube-api-access-zfgj2\") pod \"obo-prometheus-operator-68bc856cb9-6g7x4\" (UID: \"79bf21e6-60c9-4788-a02f-8efb828dc8ef\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.027209 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22284904-7391-4eb6-9ef7-adf068c3d7ec-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2\" (UID: \"22284904-7391-4eb6-9ef7-adf068c3d7ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.027258 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22284904-7391-4eb6-9ef7-adf068c3d7ec-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2\" (UID: \"22284904-7391-4eb6-9ef7-adf068c3d7ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.027302 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/111775fe-ccc4-4b93-9fcf-5a9bd115788c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-846988ff46-2gttv\" (UID: \"111775fe-ccc4-4b93-9fcf-5a9bd115788c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.027326 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/111775fe-ccc4-4b93-9fcf-5a9bd115788c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-846988ff46-2gttv\" (UID: \"111775fe-ccc4-4b93-9fcf-5a9bd115788c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.030177 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/111775fe-ccc4-4b93-9fcf-5a9bd115788c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-846988ff46-2gttv\" (UID: \"111775fe-ccc4-4b93-9fcf-5a9bd115788c\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.030298 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/111775fe-ccc4-4b93-9fcf-5a9bd115788c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-846988ff46-2gttv\" (UID: \"111775fe-ccc4-4b93-9fcf-5a9bd115788c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.035259 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22284904-7391-4eb6-9ef7-adf068c3d7ec-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2\" (UID: \"22284904-7391-4eb6-9ef7-adf068c3d7ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.048610 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22284904-7391-4eb6-9ef7-adf068c3d7ec-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2\" (UID: \"22284904-7391-4eb6-9ef7-adf068c3d7ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.059434 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4q24b"] Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.060325 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.063358 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-vp48m" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.063387 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.128193 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.128530 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vc7r\" (UniqueName: \"kubernetes.io/projected/6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8-kube-api-access-7vc7r\") pod \"observability-operator-59bdc8b94-4q24b\" (UID: \"6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8\") " pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.128851 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4q24b\" (UID: \"6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8\") " pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.156984 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(9be73bef5641caf372f32472b640b4e1b333b1f00f320e2fe3f67e8f77dbb4bd): no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.157049 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(9be73bef5641caf372f32472b640b4e1b333b1f00f320e2fe3f67e8f77dbb4bd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.157070 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(9be73bef5641caf372f32472b640b4e1b333b1f00f320e2fe3f67e8f77dbb4bd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.157148 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators(79bf21e6-60c9-4788-a02f-8efb828dc8ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators(79bf21e6-60c9-4788-a02f-8efb828dc8ef)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(9be73bef5641caf372f32472b640b4e1b333b1f00f320e2fe3f67e8f77dbb4bd): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" podUID="79bf21e6-60c9-4788-a02f-8efb828dc8ef" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.168637 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jpnk2"] Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.169445 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.171160 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-8954r" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.218901 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.229470 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.230450 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4q24b\" (UID: \"6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8\") " pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.230586 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vc7r\" (UniqueName: \"kubernetes.io/projected/6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8-kube-api-access-7vc7r\") pod \"observability-operator-59bdc8b94-4q24b\" (UID: \"6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8\") " pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.247107 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4q24b\" (UID: \"6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8\") " pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.251378 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vc7r\" (UniqueName: \"kubernetes.io/projected/6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8-kube-api-access-7vc7r\") pod \"observability-operator-59bdc8b94-4q24b\" (UID: \"6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8\") " pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.259009 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(f7592b0baed2b43fe5a2dd878daebce2c47f39568d64f65e55fa24726028b200): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.259067 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(f7592b0baed2b43fe5a2dd878daebce2c47f39568d64f65e55fa24726028b200): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.259087 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(f7592b0baed2b43fe5a2dd878daebce2c47f39568d64f65e55fa24726028b200): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.259144 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators(111775fe-ccc4-4b93-9fcf-5a9bd115788c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators(111775fe-ccc4-4b93-9fcf-5a9bd115788c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(f7592b0baed2b43fe5a2dd878daebce2c47f39568d64f65e55fa24726028b200): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" podUID="111775fe-ccc4-4b93-9fcf-5a9bd115788c" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.271989 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(17ceab9a48ebcbbded769768821a79eb17e4a08a5f158f73919a00f83b9474a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.272044 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(17ceab9a48ebcbbded769768821a79eb17e4a08a5f158f73919a00f83b9474a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.272065 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(17ceab9a48ebcbbded769768821a79eb17e4a08a5f158f73919a00f83b9474a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.272100 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators(22284904-7391-4eb6-9ef7-adf068c3d7ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators(22284904-7391-4eb6-9ef7-adf068c3d7ec)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(17ceab9a48ebcbbded769768821a79eb17e4a08a5f158f73919a00f83b9474a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" podUID="22284904-7391-4eb6-9ef7-adf068c3d7ec" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.332094 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jpnk2\" (UID: \"2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3\") " pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.332227 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgbwx\" (UniqueName: \"kubernetes.io/projected/2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3-kube-api-access-vgbwx\") pod \"perses-operator-5bf474d74f-jpnk2\" (UID: \"2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3\") " pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.390478 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.411302 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(cf8021efc32e5e98f2348e5c079ff47e7026fd22979748542736484daf27923a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.411373 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(cf8021efc32e5e98f2348e5c079ff47e7026fd22979748542736484daf27923a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.411393 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(cf8021efc32e5e98f2348e5c079ff47e7026fd22979748542736484daf27923a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.411432 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-4q24b_openshift-operators(6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-4q24b_openshift-operators(6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(cf8021efc32e5e98f2348e5c079ff47e7026fd22979748542736484daf27923a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" podUID="6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.433592 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgbwx\" (UniqueName: \"kubernetes.io/projected/2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3-kube-api-access-vgbwx\") pod \"perses-operator-5bf474d74f-jpnk2\" (UID: \"2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3\") " pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.433655 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jpnk2\" (UID: \"2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3\") " pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.434505 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jpnk2\" (UID: \"2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3\") " pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.449584 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgbwx\" (UniqueName: \"kubernetes.io/projected/2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3-kube-api-access-vgbwx\") pod \"perses-operator-5bf474d74f-jpnk2\" (UID: \"2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3\") " pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:24 crc kubenswrapper[4805]: I0216 21:05:24.497772 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.533203 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-jpnk2_openshift-operators_2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3_0(909d6297f968f838ef6c88771fcc4e6cd2ea08821bb70e494c2380fd7708b60a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.533275 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-jpnk2_openshift-operators_2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3_0(909d6297f968f838ef6c88771fcc4e6cd2ea08821bb70e494c2380fd7708b60a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.533296 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-jpnk2_openshift-operators_2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3_0(909d6297f968f838ef6c88771fcc4e6cd2ea08821bb70e494c2380fd7708b60a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:24 crc kubenswrapper[4805]: E0216 21:05:24.533350 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-jpnk2_openshift-operators(2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-jpnk2_openshift-operators(2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-jpnk2_openshift-operators_2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3_0(909d6297f968f838ef6c88771fcc4e6cd2ea08821bb70e494c2380fd7708b60a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" podUID="2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3" Feb 16 21:05:25 crc kubenswrapper[4805]: I0216 21:05:25.882983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" event={"ID":"3b847ea7-fefb-4e18-9cd5-3218724a7f3b","Type":"ContainerStarted","Data":"a67c5b884813c4fccc92fdea4b541f9d3e599ed29089b6f89e9ba36580acbd0c"} Feb 16 21:05:25 crc kubenswrapper[4805]: I0216 21:05:25.883696 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:25 crc kubenswrapper[4805]: I0216 21:05:25.934185 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:25 crc kubenswrapper[4805]: I0216 21:05:25.955395 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" podStartSLOduration=7.95537825 podStartE2EDuration="7.95537825s" podCreationTimestamp="2026-02-16 21:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-16 21:05:25.947422656 +0000 UTC m=+543.766105951" watchObservedRunningTime="2026-02-16 21:05:25.95537825 +0000 UTC m=+543.774061545" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.099810 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jpnk2"] Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.099924 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.100447 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.105672 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2"] Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.105820 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.106317 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.120834 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4"] Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.120945 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.121372 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.148770 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4q24b"] Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.148883 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.149389 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.160281 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-jpnk2_openshift-operators_2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3_0(3a9247f8cd8fe14ffd6270977936214754d5506a0104c8941dc438a365b21b16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.160361 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-jpnk2_openshift-operators_2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3_0(3a9247f8cd8fe14ffd6270977936214754d5506a0104c8941dc438a365b21b16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.160387 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-jpnk2_openshift-operators_2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3_0(3a9247f8cd8fe14ffd6270977936214754d5506a0104c8941dc438a365b21b16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.160456 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-jpnk2_openshift-operators(2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-jpnk2_openshift-operators(2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-jpnk2_openshift-operators_2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3_0(3a9247f8cd8fe14ffd6270977936214754d5506a0104c8941dc438a365b21b16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" podUID="2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.166901 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(64a556f32f53aeb630ae9d501e66764aea998220e6b29d8cdd0967d63c837dab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.166970 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(64a556f32f53aeb630ae9d501e66764aea998220e6b29d8cdd0967d63c837dab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.166993 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(64a556f32f53aeb630ae9d501e66764aea998220e6b29d8cdd0967d63c837dab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.167053 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators(22284904-7391-4eb6-9ef7-adf068c3d7ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators(22284904-7391-4eb6-9ef7-adf068c3d7ec)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(64a556f32f53aeb630ae9d501e66764aea998220e6b29d8cdd0967d63c837dab): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" podUID="22284904-7391-4eb6-9ef7-adf068c3d7ec" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.173245 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv"] Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.173360 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.173771 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.183459 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(894480d2ab333cb37447416677b441826cb7cb23691d63641d2d96f938d11a4e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.183527 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(894480d2ab333cb37447416677b441826cb7cb23691d63641d2d96f938d11a4e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.183548 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(894480d2ab333cb37447416677b441826cb7cb23691d63641d2d96f938d11a4e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.183589 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators(79bf21e6-60c9-4788-a02f-8efb828dc8ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators(79bf21e6-60c9-4788-a02f-8efb828dc8ef)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(894480d2ab333cb37447416677b441826cb7cb23691d63641d2d96f938d11a4e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" podUID="79bf21e6-60c9-4788-a02f-8efb828dc8ef" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.231710 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(e52a1b4524e1ddd382ac948020999407af265b68eaee58be60035c46b8957a30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.231791 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(e52a1b4524e1ddd382ac948020999407af265b68eaee58be60035c46b8957a30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.231813 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(e52a1b4524e1ddd382ac948020999407af265b68eaee58be60035c46b8957a30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.231869 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-4q24b_openshift-operators(6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-4q24b_openshift-operators(6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(e52a1b4524e1ddd382ac948020999407af265b68eaee58be60035c46b8957a30): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" podUID="6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.237090 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(e6237b074c1acd8c1218c954c552883917157f6badaee0cf8135db41573ca291): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.237154 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(e6237b074c1acd8c1218c954c552883917157f6badaee0cf8135db41573ca291): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.237176 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(e6237b074c1acd8c1218c954c552883917157f6badaee0cf8135db41573ca291): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:26 crc kubenswrapper[4805]: E0216 21:05:26.237230 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators(111775fe-ccc4-4b93-9fcf-5a9bd115788c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators(111775fe-ccc4-4b93-9fcf-5a9bd115788c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(e6237b074c1acd8c1218c954c552883917157f6badaee0cf8135db41573ca291): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" podUID="111775fe-ccc4-4b93-9fcf-5a9bd115788c" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.889816 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.889878 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:26 crc kubenswrapper[4805]: I0216 21:05:26.937787 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:28 crc kubenswrapper[4805]: I0216 21:05:28.597239 4805 scope.go:117] "RemoveContainer" containerID="525eb5bad3094f13416cbe9634fedc7514417458399df4d37ede4cc0a0909ad2" Feb 16 21:05:28 crc kubenswrapper[4805]: E0216 21:05:28.597811 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=kube-multus pod=multus-8qwfz_openshift-multus(7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2)\"" pod="openshift-multus/multus-8qwfz" podUID="7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2" Feb 16 21:05:36 crc kubenswrapper[4805]: I0216 21:05:36.597300 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:36 crc kubenswrapper[4805]: I0216 21:05:36.597363 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:36 crc kubenswrapper[4805]: I0216 21:05:36.598240 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:36 crc kubenswrapper[4805]: I0216 21:05:36.598241 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:36 crc kubenswrapper[4805]: E0216 21:05:36.641338 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(84d637f868193a536d84942c892ff3e0904bf3575fe70aa81a871a3a696c4f06): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 21:05:36 crc kubenswrapper[4805]: E0216 21:05:36.641402 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(84d637f868193a536d84942c892ff3e0904bf3575fe70aa81a871a3a696c4f06): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:36 crc kubenswrapper[4805]: E0216 21:05:36.641425 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(84d637f868193a536d84942c892ff3e0904bf3575fe70aa81a871a3a696c4f06): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:36 crc kubenswrapper[4805]: E0216 21:05:36.641466 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators(111775fe-ccc4-4b93-9fcf-5a9bd115788c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators(111775fe-ccc4-4b93-9fcf-5a9bd115788c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_openshift-operators_111775fe-ccc4-4b93-9fcf-5a9bd115788c_0(84d637f868193a536d84942c892ff3e0904bf3575fe70aa81a871a3a696c4f06): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" podUID="111775fe-ccc4-4b93-9fcf-5a9bd115788c" Feb 16 21:05:36 crc kubenswrapper[4805]: E0216 21:05:36.676414 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(f79f197874ebe6f178fee669e7b62c2cb08e56e88bbb4633b53bd4e4a8832574): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:36 crc kubenswrapper[4805]: E0216 21:05:36.676480 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(f79f197874ebe6f178fee669e7b62c2cb08e56e88bbb4633b53bd4e4a8832574): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:36 crc kubenswrapper[4805]: E0216 21:05:36.676502 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(f79f197874ebe6f178fee669e7b62c2cb08e56e88bbb4633b53bd4e4a8832574): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:36 crc kubenswrapper[4805]: E0216 21:05:36.676543 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators(22284904-7391-4eb6-9ef7-adf068c3d7ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators(22284904-7391-4eb6-9ef7-adf068c3d7ec)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_openshift-operators_22284904-7391-4eb6-9ef7-adf068c3d7ec_0(f79f197874ebe6f178fee669e7b62c2cb08e56e88bbb4633b53bd4e4a8832574): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" podUID="22284904-7391-4eb6-9ef7-adf068c3d7ec" Feb 16 21:05:38 crc kubenswrapper[4805]: I0216 21:05:38.099536 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:05:38 crc kubenswrapper[4805]: I0216 21:05:38.099614 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:05:39 crc kubenswrapper[4805]: I0216 21:05:39.597906 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:39 crc kubenswrapper[4805]: I0216 21:05:39.598333 4805 scope.go:117] "RemoveContainer" containerID="525eb5bad3094f13416cbe9634fedc7514417458399df4d37ede4cc0a0909ad2" Feb 16 21:05:39 crc kubenswrapper[4805]: I0216 21:05:39.598831 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:39 crc kubenswrapper[4805]: E0216 21:05:39.628378 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(e97f8dbc3fef913959b8fb9fcc7ab0ed46c1bf5fc7ec843a4d0dd2c62537f739): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:39 crc kubenswrapper[4805]: E0216 21:05:39.628445 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(e97f8dbc3fef913959b8fb9fcc7ab0ed46c1bf5fc7ec843a4d0dd2c62537f739): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:39 crc kubenswrapper[4805]: E0216 21:05:39.628466 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(e97f8dbc3fef913959b8fb9fcc7ab0ed46c1bf5fc7ec843a4d0dd2c62537f739): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:39 crc kubenswrapper[4805]: E0216 21:05:39.628538 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators(79bf21e6-60c9-4788-a02f-8efb828dc8ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators(79bf21e6-60c9-4788-a02f-8efb828dc8ef)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6g7x4_openshift-operators_79bf21e6-60c9-4788-a02f-8efb828dc8ef_0(e97f8dbc3fef913959b8fb9fcc7ab0ed46c1bf5fc7ec843a4d0dd2c62537f739): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" podUID="79bf21e6-60c9-4788-a02f-8efb828dc8ef" Feb 16 21:05:39 crc kubenswrapper[4805]: I0216 21:05:39.986437 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8qwfz_7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2/kube-multus/2.log" Feb 16 21:05:39 crc kubenswrapper[4805]: I0216 21:05:39.986492 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qwfz" event={"ID":"7c4f2ac8-1ae6-4215-8155-ea8cd17f07f2","Type":"ContainerStarted","Data":"549b2714422f52499dfe0248317c6e29cc75ae9b5a27d6e92455efc3ec06a3f9"} Feb 16 21:05:40 crc kubenswrapper[4805]: I0216 21:05:40.597027 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:40 crc kubenswrapper[4805]: I0216 21:05:40.597878 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:40 crc kubenswrapper[4805]: E0216 21:05:40.639008 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(57d8353dcac9a0d5e3f47a770584a518148eada5f2395eff2a693e89073a38f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:05:40 crc kubenswrapper[4805]: E0216 21:05:40.639091 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(57d8353dcac9a0d5e3f47a770584a518148eada5f2395eff2a693e89073a38f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:40 crc kubenswrapper[4805]: E0216 21:05:40.639121 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(57d8353dcac9a0d5e3f47a770584a518148eada5f2395eff2a693e89073a38f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:40 crc kubenswrapper[4805]: E0216 21:05:40.639183 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-4q24b_openshift-operators(6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-4q24b_openshift-operators(6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4q24b_openshift-operators_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8_0(57d8353dcac9a0d5e3f47a770584a518148eada5f2395eff2a693e89073a38f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" podUID="6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8" Feb 16 21:05:41 crc kubenswrapper[4805]: I0216 21:05:41.597851 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:41 crc kubenswrapper[4805]: I0216 21:05:41.598938 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:41 crc kubenswrapper[4805]: I0216 21:05:41.838636 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jpnk2"] Feb 16 21:05:41 crc kubenswrapper[4805]: W0216 21:05:41.847465 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aa5e9f0_6cd0_4b5b_a0c0_25b4774156f3.slice/crio-c36bbfe03f109e4cf1ff458474c9632bb075e2e37773e1797e83d44b81a45966 WatchSource:0}: Error finding container c36bbfe03f109e4cf1ff458474c9632bb075e2e37773e1797e83d44b81a45966: Status 404 returned error can't find the container with id c36bbfe03f109e4cf1ff458474c9632bb075e2e37773e1797e83d44b81a45966 Feb 16 21:05:41 crc kubenswrapper[4805]: I0216 21:05:41.998235 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" event={"ID":"2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3","Type":"ContainerStarted","Data":"c36bbfe03f109e4cf1ff458474c9632bb075e2e37773e1797e83d44b81a45966"} Feb 16 21:05:47 crc kubenswrapper[4805]: I0216 21:05:47.026343 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" event={"ID":"2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3","Type":"ContainerStarted","Data":"1fa39acf4a31334ea7bfa79af66df3d836cc627ab1338c0b063ca9023dcfa0bf"} Feb 16 21:05:47 crc kubenswrapper[4805]: I0216 21:05:47.026945 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:47 crc kubenswrapper[4805]: I0216 21:05:47.043129 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" podStartSLOduration=18.053338921 podStartE2EDuration="23.043094136s" podCreationTimestamp="2026-02-16 21:05:24 +0000 UTC" firstStartedPulling="2026-02-16 
21:05:41.852496024 +0000 UTC m=+559.671179319" lastFinishedPulling="2026-02-16 21:05:46.842251239 +0000 UTC m=+564.660934534" observedRunningTime="2026-02-16 21:05:47.042334324 +0000 UTC m=+564.861017619" watchObservedRunningTime="2026-02-16 21:05:47.043094136 +0000 UTC m=+564.861777431" Feb 16 21:05:48 crc kubenswrapper[4805]: I0216 21:05:48.753739 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mgwvp" Feb 16 21:05:49 crc kubenswrapper[4805]: I0216 21:05:49.597835 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:49 crc kubenswrapper[4805]: I0216 21:05:49.598556 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" Feb 16 21:05:49 crc kubenswrapper[4805]: I0216 21:05:49.899685 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2"] Feb 16 21:05:49 crc kubenswrapper[4805]: W0216 21:05:49.903043 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22284904_7391_4eb6_9ef7_adf068c3d7ec.slice/crio-3fc69df97485aeaf7aef7158854ab3fe307035f32ea59b78e4352df8a46acd19 WatchSource:0}: Error finding container 3fc69df97485aeaf7aef7158854ab3fe307035f32ea59b78e4352df8a46acd19: Status 404 returned error can't find the container with id 3fc69df97485aeaf7aef7158854ab3fe307035f32ea59b78e4352df8a46acd19 Feb 16 21:05:50 crc kubenswrapper[4805]: I0216 21:05:50.044174 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" 
event={"ID":"22284904-7391-4eb6-9ef7-adf068c3d7ec","Type":"ContainerStarted","Data":"3fc69df97485aeaf7aef7158854ab3fe307035f32ea59b78e4352df8a46acd19"} Feb 16 21:05:51 crc kubenswrapper[4805]: I0216 21:05:51.597299 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:51 crc kubenswrapper[4805]: I0216 21:05:51.597321 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:51 crc kubenswrapper[4805]: I0216 21:05:51.598042 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" Feb 16 21:05:51 crc kubenswrapper[4805]: I0216 21:05:51.598278 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:52 crc kubenswrapper[4805]: I0216 21:05:52.301609 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4q24b"] Feb 16 21:05:52 crc kubenswrapper[4805]: I0216 21:05:52.344675 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv"] Feb 16 21:05:52 crc kubenswrapper[4805]: W0216 21:05:52.364128 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod111775fe_ccc4_4b93_9fcf_5a9bd115788c.slice/crio-18dcaf33c1f53efe3ed24aef1916d5ed1870d8c30c4c2980e0e1ef69e3434b39 WatchSource:0}: Error finding container 18dcaf33c1f53efe3ed24aef1916d5ed1870d8c30c4c2980e0e1ef69e3434b39: Status 404 returned error can't find the container with id 18dcaf33c1f53efe3ed24aef1916d5ed1870d8c30c4c2980e0e1ef69e3434b39 Feb 16 21:05:53 crc kubenswrapper[4805]: I0216 21:05:53.068838 
4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" event={"ID":"6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8","Type":"ContainerStarted","Data":"164bf705a7d3593427ab6fd7de39e8a022f417013ca6f0e3043c725ade5360b9"} Feb 16 21:05:53 crc kubenswrapper[4805]: I0216 21:05:53.070452 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" event={"ID":"111775fe-ccc4-4b93-9fcf-5a9bd115788c","Type":"ContainerStarted","Data":"1b2bb64491a41b521f99155350cf69550741d219699e9a44cec1f8431a628ea2"} Feb 16 21:05:53 crc kubenswrapper[4805]: I0216 21:05:53.070509 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" event={"ID":"111775fe-ccc4-4b93-9fcf-5a9bd115788c","Type":"ContainerStarted","Data":"18dcaf33c1f53efe3ed24aef1916d5ed1870d8c30c4c2980e0e1ef69e3434b39"} Feb 16 21:05:53 crc kubenswrapper[4805]: I0216 21:05:53.071954 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" event={"ID":"22284904-7391-4eb6-9ef7-adf068c3d7ec","Type":"ContainerStarted","Data":"fdd9525378351b7be8a5d4cd468742befb7c3124a798925a164bf43eeaea16d2"} Feb 16 21:05:53 crc kubenswrapper[4805]: I0216 21:05:53.105042 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-2gttv" podStartSLOduration=30.105016079 podStartE2EDuration="30.105016079s" podCreationTimestamp="2026-02-16 21:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:05:53.089964836 +0000 UTC m=+570.908648141" watchObservedRunningTime="2026-02-16 21:05:53.105016079 +0000 UTC m=+570.923699384" Feb 16 21:05:53 crc kubenswrapper[4805]: I0216 
21:05:53.130788 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-846988ff46-cvjn2" podStartSLOduration=27.80374434 podStartE2EDuration="30.130768374s" podCreationTimestamp="2026-02-16 21:05:23 +0000 UTC" firstStartedPulling="2026-02-16 21:05:49.906156047 +0000 UTC m=+567.724839332" lastFinishedPulling="2026-02-16 21:05:52.233180051 +0000 UTC m=+570.051863366" observedRunningTime="2026-02-16 21:05:53.12139197 +0000 UTC m=+570.940075305" watchObservedRunningTime="2026-02-16 21:05:53.130768374 +0000 UTC m=+570.949451679" Feb 16 21:05:53 crc kubenswrapper[4805]: I0216 21:05:53.597811 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:53 crc kubenswrapper[4805]: I0216 21:05:53.604602 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" Feb 16 21:05:53 crc kubenswrapper[4805]: I0216 21:05:53.943272 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4"] Feb 16 21:05:53 crc kubenswrapper[4805]: W0216 21:05:53.945617 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79bf21e6_60c9_4788_a02f_8efb828dc8ef.slice/crio-45d35066b175fc950d6568a0a94fa3a246704873f8c48326a2bc7eefd96eddf4 WatchSource:0}: Error finding container 45d35066b175fc950d6568a0a94fa3a246704873f8c48326a2bc7eefd96eddf4: Status 404 returned error can't find the container with id 45d35066b175fc950d6568a0a94fa3a246704873f8c48326a2bc7eefd96eddf4 Feb 16 21:05:54 crc kubenswrapper[4805]: I0216 21:05:54.078926 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" 
event={"ID":"79bf21e6-60c9-4788-a02f-8efb828dc8ef","Type":"ContainerStarted","Data":"45d35066b175fc950d6568a0a94fa3a246704873f8c48326a2bc7eefd96eddf4"} Feb 16 21:05:54 crc kubenswrapper[4805]: I0216 21:05:54.500609 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-jpnk2" Feb 16 21:05:59 crc kubenswrapper[4805]: I0216 21:05:59.123633 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" event={"ID":"79bf21e6-60c9-4788-a02f-8efb828dc8ef","Type":"ContainerStarted","Data":"a34eca2dd9c9d196959a04a0d206add757451faa993696d4db797847caac3200"} Feb 16 21:05:59 crc kubenswrapper[4805]: I0216 21:05:59.127060 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" event={"ID":"6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8","Type":"ContainerStarted","Data":"0ae3461ae411c20b06e190c19f1045d8932a92b4eef562b0d057345ba2175ce8"} Feb 16 21:05:59 crc kubenswrapper[4805]: I0216 21:05:59.127401 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:59 crc kubenswrapper[4805]: I0216 21:05:59.130537 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" Feb 16 21:05:59 crc kubenswrapper[4805]: I0216 21:05:59.149764 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g7x4" podStartSLOduration=32.077099564 podStartE2EDuration="36.149698288s" podCreationTimestamp="2026-02-16 21:05:23 +0000 UTC" firstStartedPulling="2026-02-16 21:05:53.947714148 +0000 UTC m=+571.766397443" lastFinishedPulling="2026-02-16 21:05:58.020312872 +0000 UTC m=+575.838996167" observedRunningTime="2026-02-16 21:05:59.146661493 +0000 UTC m=+576.965344818" 
watchObservedRunningTime="2026-02-16 21:05:59.149698288 +0000 UTC m=+576.968381623" Feb 16 21:06:08 crc kubenswrapper[4805]: I0216 21:06:08.100072 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:06:08 crc kubenswrapper[4805]: I0216 21:06:08.100688 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:06:08 crc kubenswrapper[4805]: I0216 21:06:08.100776 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:06:08 crc kubenswrapper[4805]: I0216 21:06:08.101796 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ceb0b80c1374cd4ccb9dd0d277e234416e87b41fce8125fe9f568455202c275d"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:06:08 crc kubenswrapper[4805]: I0216 21:06:08.101897 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://ceb0b80c1374cd4ccb9dd0d277e234416e87b41fce8125fe9f568455202c275d" gracePeriod=600 Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.222981 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="ceb0b80c1374cd4ccb9dd0d277e234416e87b41fce8125fe9f568455202c275d" exitCode=0 Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.223036 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"ceb0b80c1374cd4ccb9dd0d277e234416e87b41fce8125fe9f568455202c275d"} Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.224368 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"2ea6a527da3d45efcd7fbad2ab314c9a6cf5f646dedd04c29a2b897c9c0a84d1"} Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.224418 4805 scope.go:117] "RemoveContainer" containerID="5d5aa7da8c088ddcac44286336170e6647dae110a6c4f871ef29f7ab0795c9ec" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.253783 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-4q24b" podStartSLOduration=39.530516127 podStartE2EDuration="45.25375941s" podCreationTimestamp="2026-02-16 21:05:24 +0000 UTC" firstStartedPulling="2026-02-16 21:05:52.318873961 +0000 UTC m=+570.137557256" lastFinishedPulling="2026-02-16 21:05:58.042117244 +0000 UTC m=+575.860800539" observedRunningTime="2026-02-16 21:05:59.179036403 +0000 UTC m=+576.997719708" watchObservedRunningTime="2026-02-16 21:06:09.25375941 +0000 UTC m=+587.072442715" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.574570 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-7g79d"] Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.576088 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7g79d" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.578445 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.578482 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-t4h68" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.578704 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.580983 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pcjsg"] Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.581838 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pcjsg" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.583130 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6z4nz" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.590523 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-78zrw"] Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.592095 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-78zrw" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.595665 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5gkdp" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.610758 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7g79d"] Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.617059 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pcjsg"] Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.627475 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-78zrw"] Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.653519 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrrls\" (UniqueName: \"kubernetes.io/projected/d8c90994-bbc1-48cc-8663-0fee9997a85c-kube-api-access-wrrls\") pod \"cert-manager-858654f9db-7g79d\" (UID: \"d8c90994-bbc1-48cc-8663-0fee9997a85c\") " pod="cert-manager/cert-manager-858654f9db-7g79d" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.653647 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9ngf\" (UniqueName: \"kubernetes.io/projected/7854ef0f-6654-4e1d-960f-3accb2997f48-kube-api-access-s9ngf\") pod \"cert-manager-cainjector-cf98fcc89-pcjsg\" (UID: \"7854ef0f-6654-4e1d-960f-3accb2997f48\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-pcjsg" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.653678 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8f9t\" (UniqueName: \"kubernetes.io/projected/73c1222f-9f42-429a-8764-0193764d37bb-kube-api-access-x8f9t\") pod \"cert-manager-webhook-687f57d79b-78zrw\" 
(UID: \"73c1222f-9f42-429a-8764-0193764d37bb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-78zrw" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.755997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9ngf\" (UniqueName: \"kubernetes.io/projected/7854ef0f-6654-4e1d-960f-3accb2997f48-kube-api-access-s9ngf\") pod \"cert-manager-cainjector-cf98fcc89-pcjsg\" (UID: \"7854ef0f-6654-4e1d-960f-3accb2997f48\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-pcjsg" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.756070 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8f9t\" (UniqueName: \"kubernetes.io/projected/73c1222f-9f42-429a-8764-0193764d37bb-kube-api-access-x8f9t\") pod \"cert-manager-webhook-687f57d79b-78zrw\" (UID: \"73c1222f-9f42-429a-8764-0193764d37bb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-78zrw" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.756143 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrrls\" (UniqueName: \"kubernetes.io/projected/d8c90994-bbc1-48cc-8663-0fee9997a85c-kube-api-access-wrrls\") pod \"cert-manager-858654f9db-7g79d\" (UID: \"d8c90994-bbc1-48cc-8663-0fee9997a85c\") " pod="cert-manager/cert-manager-858654f9db-7g79d" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.773218 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9ngf\" (UniqueName: \"kubernetes.io/projected/7854ef0f-6654-4e1d-960f-3accb2997f48-kube-api-access-s9ngf\") pod \"cert-manager-cainjector-cf98fcc89-pcjsg\" (UID: \"7854ef0f-6654-4e1d-960f-3accb2997f48\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-pcjsg" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.773259 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrrls\" (UniqueName: 
\"kubernetes.io/projected/d8c90994-bbc1-48cc-8663-0fee9997a85c-kube-api-access-wrrls\") pod \"cert-manager-858654f9db-7g79d\" (UID: \"d8c90994-bbc1-48cc-8663-0fee9997a85c\") " pod="cert-manager/cert-manager-858654f9db-7g79d" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.774149 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8f9t\" (UniqueName: \"kubernetes.io/projected/73c1222f-9f42-429a-8764-0193764d37bb-kube-api-access-x8f9t\") pod \"cert-manager-webhook-687f57d79b-78zrw\" (UID: \"73c1222f-9f42-429a-8764-0193764d37bb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-78zrw" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.894853 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7g79d" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.903618 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pcjsg" Feb 16 21:06:09 crc kubenswrapper[4805]: I0216 21:06:09.920149 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-78zrw" Feb 16 21:06:10 crc kubenswrapper[4805]: I0216 21:06:10.120171 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pcjsg"] Feb 16 21:06:10 crc kubenswrapper[4805]: I0216 21:06:10.232390 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pcjsg" event={"ID":"7854ef0f-6654-4e1d-960f-3accb2997f48","Type":"ContainerStarted","Data":"15a3f42bc6dc92afc53007c16794ffe4281f04c61f603164246d6cd3b45074a3"} Feb 16 21:06:10 crc kubenswrapper[4805]: W0216 21:06:10.408829 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8c90994_bbc1_48cc_8663_0fee9997a85c.slice/crio-8989d078d7c3a84f59238c9354079a15161c187d73fcba3baaed39562679624e WatchSource:0}: Error finding container 8989d078d7c3a84f59238c9354079a15161c187d73fcba3baaed39562679624e: Status 404 returned error can't find the container with id 8989d078d7c3a84f59238c9354079a15161c187d73fcba3baaed39562679624e Feb 16 21:06:10 crc kubenswrapper[4805]: I0216 21:06:10.409669 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7g79d"] Feb 16 21:06:10 crc kubenswrapper[4805]: I0216 21:06:10.429641 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-78zrw"] Feb 16 21:06:11 crc kubenswrapper[4805]: I0216 21:06:11.250304 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7g79d" event={"ID":"d8c90994-bbc1-48cc-8663-0fee9997a85c","Type":"ContainerStarted","Data":"8989d078d7c3a84f59238c9354079a15161c187d73fcba3baaed39562679624e"} Feb 16 21:06:11 crc kubenswrapper[4805]: I0216 21:06:11.252111 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-78zrw" 
event={"ID":"73c1222f-9f42-429a-8764-0193764d37bb","Type":"ContainerStarted","Data":"a00134be6a192acc039b8a2b1cb8057d6eb7bfa4e4812f52694e588724c4a4ac"} Feb 16 21:06:13 crc kubenswrapper[4805]: I0216 21:06:13.269219 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pcjsg" event={"ID":"7854ef0f-6654-4e1d-960f-3accb2997f48","Type":"ContainerStarted","Data":"a02a6253579484e7cece272d5c9463bed97c43c7f800bd123f6bd5a2eac4dda4"} Feb 16 21:06:13 crc kubenswrapper[4805]: I0216 21:06:13.290800 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pcjsg" podStartSLOduration=1.870836766 podStartE2EDuration="4.290783392s" podCreationTimestamp="2026-02-16 21:06:09 +0000 UTC" firstStartedPulling="2026-02-16 21:06:10.137505242 +0000 UTC m=+587.956188537" lastFinishedPulling="2026-02-16 21:06:12.557451868 +0000 UTC m=+590.376135163" observedRunningTime="2026-02-16 21:06:13.285535104 +0000 UTC m=+591.104218399" watchObservedRunningTime="2026-02-16 21:06:13.290783392 +0000 UTC m=+591.109466687" Feb 16 21:06:15 crc kubenswrapper[4805]: I0216 21:06:15.284080 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7g79d" event={"ID":"d8c90994-bbc1-48cc-8663-0fee9997a85c","Type":"ContainerStarted","Data":"ec73729d4bd58ba04b8806bbfb229af29dd6d8f64e55c0e6019f3ddf8c7be441"} Feb 16 21:06:15 crc kubenswrapper[4805]: I0216 21:06:15.285287 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-78zrw" event={"ID":"73c1222f-9f42-429a-8764-0193764d37bb","Type":"ContainerStarted","Data":"47c7e3ecb50b0e4ef5c2b7bdd0bd7f0b4dd8acbbc7d832386d5811582a3bc767"} Feb 16 21:06:15 crc kubenswrapper[4805]: I0216 21:06:15.285604 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-78zrw" Feb 16 21:06:15 crc kubenswrapper[4805]: I0216 
21:06:15.308144 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-7g79d" podStartSLOduration=2.198398854 podStartE2EDuration="6.308127371s" podCreationTimestamp="2026-02-16 21:06:09 +0000 UTC" firstStartedPulling="2026-02-16 21:06:10.413067418 +0000 UTC m=+588.231750743" lastFinishedPulling="2026-02-16 21:06:14.522795955 +0000 UTC m=+592.341479260" observedRunningTime="2026-02-16 21:06:15.306820684 +0000 UTC m=+593.125503979" watchObservedRunningTime="2026-02-16 21:06:15.308127371 +0000 UTC m=+593.126810666" Feb 16 21:06:15 crc kubenswrapper[4805]: I0216 21:06:15.332572 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-78zrw" podStartSLOduration=2.309758223 podStartE2EDuration="6.332557877s" podCreationTimestamp="2026-02-16 21:06:09 +0000 UTC" firstStartedPulling="2026-02-16 21:06:10.44195655 +0000 UTC m=+588.260639885" lastFinishedPulling="2026-02-16 21:06:14.464756244 +0000 UTC m=+592.283439539" observedRunningTime="2026-02-16 21:06:15.328867004 +0000 UTC m=+593.147550299" watchObservedRunningTime="2026-02-16 21:06:15.332557877 +0000 UTC m=+593.151241172" Feb 16 21:06:19 crc kubenswrapper[4805]: I0216 21:06:19.925632 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-78zrw" Feb 16 21:06:23 crc kubenswrapper[4805]: I0216 21:06:23.954176 4805 scope.go:117] "RemoveContainer" containerID="4c150a14529aa8817e3d8690895f08eb74e07f8880547c9f45798a090d42a9c8" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.583000 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw"] Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.585534 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.587859 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.595283 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw"] Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.665910 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.665992 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.666029 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gnm5\" (UniqueName: \"kubernetes.io/projected/6785263a-326f-4912-b4bc-c1cea001e2a9-kube-api-access-4gnm5\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:41 crc kubenswrapper[4805]: 
I0216 21:06:41.767408 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.767476 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.767508 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gnm5\" (UniqueName: \"kubernetes.io/projected/6785263a-326f-4912-b4bc-c1cea001e2a9-kube-api-access-4gnm5\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.768241 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.768368 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.787836 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gnm5\" (UniqueName: \"kubernetes.io/projected/6785263a-326f-4912-b4bc-c1cea001e2a9-kube-api-access-4gnm5\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.915890 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.960047 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62"] Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.961190 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:41 crc kubenswrapper[4805]: I0216 21:06:41.973550 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62"] Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.071536 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62\" (UID: \"5ede007c-534d-4702-8d57-307734558aff\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.071874 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62\" (UID: \"5ede007c-534d-4702-8d57-307734558aff\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.071927 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7npd\" (UniqueName: \"kubernetes.io/projected/5ede007c-534d-4702-8d57-307734558aff-kube-api-access-d7npd\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62\" (UID: \"5ede007c-534d-4702-8d57-307734558aff\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.181426 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7npd\" (UniqueName: 
\"kubernetes.io/projected/5ede007c-534d-4702-8d57-307734558aff-kube-api-access-d7npd\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62\" (UID: \"5ede007c-534d-4702-8d57-307734558aff\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.181520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62\" (UID: \"5ede007c-534d-4702-8d57-307734558aff\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.181548 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62\" (UID: \"5ede007c-534d-4702-8d57-307734558aff\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.182332 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62\" (UID: \"5ede007c-534d-4702-8d57-307734558aff\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.183252 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62\" (UID: 
\"5ede007c-534d-4702-8d57-307734558aff\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.183346 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw"] Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.203486 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7npd\" (UniqueName: \"kubernetes.io/projected/5ede007c-534d-4702-8d57-307734558aff-kube-api-access-d7npd\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62\" (UID: \"5ede007c-534d-4702-8d57-307734558aff\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.311686 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.503748 4805 generic.go:334] "Generic (PLEG): container finished" podID="6785263a-326f-4912-b4bc-c1cea001e2a9" containerID="5fb401d76cf87e5463a08604fc748a2215d784c73362293e3b88d4529e1771ef" exitCode=0 Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.503799 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" event={"ID":"6785263a-326f-4912-b4bc-c1cea001e2a9","Type":"ContainerDied","Data":"5fb401d76cf87e5463a08604fc748a2215d784c73362293e3b88d4529e1771ef"} Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.504024 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" 
event={"ID":"6785263a-326f-4912-b4bc-c1cea001e2a9","Type":"ContainerStarted","Data":"043f45456a9e52f60a2913f764193999f9e5b5b68955a118d0bcb760c8135c78"} Feb 16 21:06:42 crc kubenswrapper[4805]: I0216 21:06:42.802927 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62"] Feb 16 21:06:42 crc kubenswrapper[4805]: W0216 21:06:42.817043 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ede007c_534d_4702_8d57_307734558aff.slice/crio-539f7bea2d3a6d39b2a3367be1bbfae2a4c9a99228f9d3f9aa1b68b5320f572d WatchSource:0}: Error finding container 539f7bea2d3a6d39b2a3367be1bbfae2a4c9a99228f9d3f9aa1b68b5320f572d: Status 404 returned error can't find the container with id 539f7bea2d3a6d39b2a3367be1bbfae2a4c9a99228f9d3f9aa1b68b5320f572d Feb 16 21:06:43 crc kubenswrapper[4805]: I0216 21:06:43.510081 4805 generic.go:334] "Generic (PLEG): container finished" podID="5ede007c-534d-4702-8d57-307734558aff" containerID="921d2d91637f9bbf78ce442e830a864c8c4a0e915ff29d682dbf719db794b4b7" exitCode=0 Feb 16 21:06:43 crc kubenswrapper[4805]: I0216 21:06:43.510180 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" event={"ID":"5ede007c-534d-4702-8d57-307734558aff","Type":"ContainerDied","Data":"921d2d91637f9bbf78ce442e830a864c8c4a0e915ff29d682dbf719db794b4b7"} Feb 16 21:06:43 crc kubenswrapper[4805]: I0216 21:06:43.510455 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" event={"ID":"5ede007c-534d-4702-8d57-307734558aff","Type":"ContainerStarted","Data":"539f7bea2d3a6d39b2a3367be1bbfae2a4c9a99228f9d3f9aa1b68b5320f572d"} Feb 16 21:06:44 crc kubenswrapper[4805]: I0216 21:06:44.536602 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="6785263a-326f-4912-b4bc-c1cea001e2a9" containerID="340dfed6226a976c7ccc307fb83c64885deb2497a8fe16e3e931de7a9b42261e" exitCode=0 Feb 16 21:06:44 crc kubenswrapper[4805]: I0216 21:06:44.536694 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" event={"ID":"6785263a-326f-4912-b4bc-c1cea001e2a9","Type":"ContainerDied","Data":"340dfed6226a976c7ccc307fb83c64885deb2497a8fe16e3e931de7a9b42261e"} Feb 16 21:06:45 crc kubenswrapper[4805]: I0216 21:06:45.556110 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" event={"ID":"6785263a-326f-4912-b4bc-c1cea001e2a9","Type":"ContainerStarted","Data":"3fddac1a69df8317af6cb53bd5f3b780e4fc056e80529b82f8a09be6e0be4699"} Feb 16 21:06:45 crc kubenswrapper[4805]: I0216 21:06:45.579530 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" podStartSLOduration=3.310951946 podStartE2EDuration="4.579508885s" podCreationTimestamp="2026-02-16 21:06:41 +0000 UTC" firstStartedPulling="2026-02-16 21:06:42.505221666 +0000 UTC m=+620.323904961" lastFinishedPulling="2026-02-16 21:06:43.773778605 +0000 UTC m=+621.592461900" observedRunningTime="2026-02-16 21:06:45.577780307 +0000 UTC m=+623.396463612" watchObservedRunningTime="2026-02-16 21:06:45.579508885 +0000 UTC m=+623.398192180" Feb 16 21:06:46 crc kubenswrapper[4805]: I0216 21:06:46.566988 4805 generic.go:334] "Generic (PLEG): container finished" podID="5ede007c-534d-4702-8d57-307734558aff" containerID="d3864b84790e1506b4d6d323fa7928d3cfa19453df090812be8a05469f6b9fe6" exitCode=0 Feb 16 21:06:46 crc kubenswrapper[4805]: I0216 21:06:46.567070 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" 
event={"ID":"5ede007c-534d-4702-8d57-307734558aff","Type":"ContainerDied","Data":"d3864b84790e1506b4d6d323fa7928d3cfa19453df090812be8a05469f6b9fe6"} Feb 16 21:06:46 crc kubenswrapper[4805]: I0216 21:06:46.571179 4805 generic.go:334] "Generic (PLEG): container finished" podID="6785263a-326f-4912-b4bc-c1cea001e2a9" containerID="3fddac1a69df8317af6cb53bd5f3b780e4fc056e80529b82f8a09be6e0be4699" exitCode=0 Feb 16 21:06:46 crc kubenswrapper[4805]: I0216 21:06:46.571213 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" event={"ID":"6785263a-326f-4912-b4bc-c1cea001e2a9","Type":"ContainerDied","Data":"3fddac1a69df8317af6cb53bd5f3b780e4fc056e80529b82f8a09be6e0be4699"} Feb 16 21:06:47 crc kubenswrapper[4805]: I0216 21:06:47.581829 4805 generic.go:334] "Generic (PLEG): container finished" podID="5ede007c-534d-4702-8d57-307734558aff" containerID="5885b93b2ddb351c9adcecdea9c5b673077eadad145bb2df95fefa5acdb6533a" exitCode=0 Feb 16 21:06:47 crc kubenswrapper[4805]: I0216 21:06:47.581904 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" event={"ID":"5ede007c-534d-4702-8d57-307734558aff","Type":"ContainerDied","Data":"5885b93b2ddb351c9adcecdea9c5b673077eadad145bb2df95fefa5acdb6533a"} Feb 16 21:06:47 crc kubenswrapper[4805]: I0216 21:06:47.891401 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:47 crc kubenswrapper[4805]: I0216 21:06:47.961438 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-util\") pod \"6785263a-326f-4912-b4bc-c1cea001e2a9\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " Feb 16 21:06:47 crc kubenswrapper[4805]: I0216 21:06:47.973217 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gnm5\" (UniqueName: \"kubernetes.io/projected/6785263a-326f-4912-b4bc-c1cea001e2a9-kube-api-access-4gnm5\") pod \"6785263a-326f-4912-b4bc-c1cea001e2a9\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " Feb 16 21:06:47 crc kubenswrapper[4805]: I0216 21:06:47.973311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-bundle\") pod \"6785263a-326f-4912-b4bc-c1cea001e2a9\" (UID: \"6785263a-326f-4912-b4bc-c1cea001e2a9\") " Feb 16 21:06:47 crc kubenswrapper[4805]: I0216 21:06:47.974911 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-bundle" (OuterVolumeSpecName: "bundle") pod "6785263a-326f-4912-b4bc-c1cea001e2a9" (UID: "6785263a-326f-4912-b4bc-c1cea001e2a9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:06:47 crc kubenswrapper[4805]: I0216 21:06:47.978646 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6785263a-326f-4912-b4bc-c1cea001e2a9-kube-api-access-4gnm5" (OuterVolumeSpecName: "kube-api-access-4gnm5") pod "6785263a-326f-4912-b4bc-c1cea001e2a9" (UID: "6785263a-326f-4912-b4bc-c1cea001e2a9"). InnerVolumeSpecName "kube-api-access-4gnm5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:06:47 crc kubenswrapper[4805]: I0216 21:06:47.980748 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-util" (OuterVolumeSpecName: "util") pod "6785263a-326f-4912-b4bc-c1cea001e2a9" (UID: "6785263a-326f-4912-b4bc-c1cea001e2a9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.075605 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.075648 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6785263a-326f-4912-b4bc-c1cea001e2a9-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.075661 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gnm5\" (UniqueName: \"kubernetes.io/projected/6785263a-326f-4912-b4bc-c1cea001e2a9-kube-api-access-4gnm5\") on node \"crc\" DevicePath \"\"" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.590876 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" event={"ID":"6785263a-326f-4912-b4bc-c1cea001e2a9","Type":"ContainerDied","Data":"043f45456a9e52f60a2913f764193999f9e5b5b68955a118d0bcb760c8135c78"} Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.590932 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043f45456a9e52f60a2913f764193999f9e5b5b68955a118d0bcb760c8135c78" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.590905 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.823149 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.887985 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7npd\" (UniqueName: \"kubernetes.io/projected/5ede007c-534d-4702-8d57-307734558aff-kube-api-access-d7npd\") pod \"5ede007c-534d-4702-8d57-307734558aff\" (UID: \"5ede007c-534d-4702-8d57-307734558aff\") " Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.888050 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-util\") pod \"5ede007c-534d-4702-8d57-307734558aff\" (UID: \"5ede007c-534d-4702-8d57-307734558aff\") " Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.888119 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-bundle\") pod \"5ede007c-534d-4702-8d57-307734558aff\" (UID: \"5ede007c-534d-4702-8d57-307734558aff\") " Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.889001 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-bundle" (OuterVolumeSpecName: "bundle") pod "5ede007c-534d-4702-8d57-307734558aff" (UID: "5ede007c-534d-4702-8d57-307734558aff"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.892675 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ede007c-534d-4702-8d57-307734558aff-kube-api-access-d7npd" (OuterVolumeSpecName: "kube-api-access-d7npd") pod "5ede007c-534d-4702-8d57-307734558aff" (UID: "5ede007c-534d-4702-8d57-307734558aff"). InnerVolumeSpecName "kube-api-access-d7npd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.897357 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-util" (OuterVolumeSpecName: "util") pod "5ede007c-534d-4702-8d57-307734558aff" (UID: "5ede007c-534d-4702-8d57-307734558aff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.989956 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7npd\" (UniqueName: \"kubernetes.io/projected/5ede007c-534d-4702-8d57-307734558aff-kube-api-access-d7npd\") on node \"crc\" DevicePath \"\"" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.990284 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:06:48 crc kubenswrapper[4805]: I0216 21:06:48.990301 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ede007c-534d-4702-8d57-307734558aff-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:06:49 crc kubenswrapper[4805]: I0216 21:06:49.604334 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" Feb 16 21:06:49 crc kubenswrapper[4805]: I0216 21:06:49.606513 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62" event={"ID":"5ede007c-534d-4702-8d57-307734558aff","Type":"ContainerDied","Data":"539f7bea2d3a6d39b2a3367be1bbfae2a4c9a99228f9d3f9aa1b68b5320f572d"} Feb 16 21:06:49 crc kubenswrapper[4805]: I0216 21:06:49.606561 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="539f7bea2d3a6d39b2a3367be1bbfae2a4c9a99228f9d3f9aa1b68b5320f572d" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.717791 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7"] Feb 16 21:06:58 crc kubenswrapper[4805]: E0216 21:06:58.718520 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6785263a-326f-4912-b4bc-c1cea001e2a9" containerName="util" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.718533 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6785263a-326f-4912-b4bc-c1cea001e2a9" containerName="util" Feb 16 21:06:58 crc kubenswrapper[4805]: E0216 21:06:58.718543 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ede007c-534d-4702-8d57-307734558aff" containerName="extract" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.718549 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ede007c-534d-4702-8d57-307734558aff" containerName="extract" Feb 16 21:06:58 crc kubenswrapper[4805]: E0216 21:06:58.718558 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ede007c-534d-4702-8d57-307734558aff" containerName="pull" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.718564 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5ede007c-534d-4702-8d57-307734558aff" containerName="pull" Feb 16 21:06:58 crc kubenswrapper[4805]: E0216 21:06:58.718573 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6785263a-326f-4912-b4bc-c1cea001e2a9" containerName="pull" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.718579 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6785263a-326f-4912-b4bc-c1cea001e2a9" containerName="pull" Feb 16 21:06:58 crc kubenswrapper[4805]: E0216 21:06:58.718595 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6785263a-326f-4912-b4bc-c1cea001e2a9" containerName="extract" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.718601 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6785263a-326f-4912-b4bc-c1cea001e2a9" containerName="extract" Feb 16 21:06:58 crc kubenswrapper[4805]: E0216 21:06:58.718615 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ede007c-534d-4702-8d57-307734558aff" containerName="util" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.718621 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ede007c-534d-4702-8d57-307734558aff" containerName="util" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.718768 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ede007c-534d-4702-8d57-307734558aff" containerName="extract" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.718780 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6785263a-326f-4912-b4bc-c1cea001e2a9" containerName="extract" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.719415 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.721967 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.729619 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.737324 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.737354 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.737365 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-b4mnc" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.737365 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.803817 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7"] Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.879468 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbsmv\" (UniqueName: \"kubernetes.io/projected/efa9b8e9-54a3-4740-9f0e-391521f3ed25-kube-api-access-jbsmv\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: 
I0216 21:06:58.879528 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/efa9b8e9-54a3-4740-9f0e-391521f3ed25-manager-config\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.879575 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/efa9b8e9-54a3-4740-9f0e-391521f3ed25-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.879623 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/efa9b8e9-54a3-4740-9f0e-391521f3ed25-webhook-cert\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.879685 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/efa9b8e9-54a3-4740-9f0e-391521f3ed25-apiservice-cert\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.981476 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/efa9b8e9-54a3-4740-9f0e-391521f3ed25-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.981535 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/efa9b8e9-54a3-4740-9f0e-391521f3ed25-webhook-cert\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.981560 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/efa9b8e9-54a3-4740-9f0e-391521f3ed25-apiservice-cert\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.981613 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbsmv\" (UniqueName: \"kubernetes.io/projected/efa9b8e9-54a3-4740-9f0e-391521f3ed25-kube-api-access-jbsmv\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.981637 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/efa9b8e9-54a3-4740-9f0e-391521f3ed25-manager-config\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: 
\"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.982392 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/efa9b8e9-54a3-4740-9f0e-391521f3ed25-manager-config\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.987040 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/efa9b8e9-54a3-4740-9f0e-391521f3ed25-apiservice-cert\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.987060 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/efa9b8e9-54a3-4740-9f0e-391521f3ed25-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:58 crc kubenswrapper[4805]: I0216 21:06:58.990260 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/efa9b8e9-54a3-4740-9f0e-391521f3ed25-webhook-cert\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:59 crc kubenswrapper[4805]: I0216 21:06:59.000293 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jbsmv\" (UniqueName: \"kubernetes.io/projected/efa9b8e9-54a3-4740-9f0e-391521f3ed25-kube-api-access-jbsmv\") pod \"loki-operator-controller-manager-6c4778c849-ds7n7\" (UID: \"efa9b8e9-54a3-4740-9f0e-391521f3ed25\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:59 crc kubenswrapper[4805]: I0216 21:06:59.042381 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:06:59 crc kubenswrapper[4805]: I0216 21:06:59.259864 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7"] Feb 16 21:06:59 crc kubenswrapper[4805]: I0216 21:06:59.672852 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" event={"ID":"efa9b8e9-54a3-4740-9f0e-391521f3ed25","Type":"ContainerStarted","Data":"9365f13f2b0d3819ef48324b46c8aba59273af1f0538d9f9e638ccb8064d9294"} Feb 16 21:07:02 crc kubenswrapper[4805]: I0216 21:07:02.597463 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-gpnwn"] Feb 16 21:07:02 crc kubenswrapper[4805]: I0216 21:07:02.598634 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-gpnwn" Feb 16 21:07:02 crc kubenswrapper[4805]: I0216 21:07:02.600631 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 16 21:07:02 crc kubenswrapper[4805]: I0216 21:07:02.600900 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-z5lml" Feb 16 21:07:02 crc kubenswrapper[4805]: I0216 21:07:02.601196 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 16 21:07:02 crc kubenswrapper[4805]: I0216 21:07:02.608084 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-gpnwn"] Feb 16 21:07:02 crc kubenswrapper[4805]: I0216 21:07:02.739791 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lzrx\" (UniqueName: \"kubernetes.io/projected/2d0f2c52-868b-4753-9d83-9d7204ea6d2d-kube-api-access-6lzrx\") pod \"cluster-logging-operator-c769fd969-gpnwn\" (UID: \"2d0f2c52-868b-4753-9d83-9d7204ea6d2d\") " pod="openshift-logging/cluster-logging-operator-c769fd969-gpnwn" Feb 16 21:07:02 crc kubenswrapper[4805]: I0216 21:07:02.841165 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lzrx\" (UniqueName: \"kubernetes.io/projected/2d0f2c52-868b-4753-9d83-9d7204ea6d2d-kube-api-access-6lzrx\") pod \"cluster-logging-operator-c769fd969-gpnwn\" (UID: \"2d0f2c52-868b-4753-9d83-9d7204ea6d2d\") " pod="openshift-logging/cluster-logging-operator-c769fd969-gpnwn" Feb 16 21:07:02 crc kubenswrapper[4805]: I0216 21:07:02.862315 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lzrx\" (UniqueName: \"kubernetes.io/projected/2d0f2c52-868b-4753-9d83-9d7204ea6d2d-kube-api-access-6lzrx\") pod 
\"cluster-logging-operator-c769fd969-gpnwn\" (UID: \"2d0f2c52-868b-4753-9d83-9d7204ea6d2d\") " pod="openshift-logging/cluster-logging-operator-c769fd969-gpnwn" Feb 16 21:07:02 crc kubenswrapper[4805]: I0216 21:07:02.912562 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-gpnwn" Feb 16 21:07:04 crc kubenswrapper[4805]: I0216 21:07:04.483296 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-gpnwn"] Feb 16 21:07:04 crc kubenswrapper[4805]: W0216 21:07:04.491292 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d0f2c52_868b_4753_9d83_9d7204ea6d2d.slice/crio-2f941752d249f3818a5cef664757954b9b923770e3d0f5d823dc5eb24248bd9d WatchSource:0}: Error finding container 2f941752d249f3818a5cef664757954b9b923770e3d0f5d823dc5eb24248bd9d: Status 404 returned error can't find the container with id 2f941752d249f3818a5cef664757954b9b923770e3d0f5d823dc5eb24248bd9d Feb 16 21:07:04 crc kubenswrapper[4805]: I0216 21:07:04.730061 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" event={"ID":"efa9b8e9-54a3-4740-9f0e-391521f3ed25","Type":"ContainerStarted","Data":"4f1f9dd249fe769c76d732598b8f80663399f08e0d26ed69613114df1e837414"} Feb 16 21:07:04 crc kubenswrapper[4805]: I0216 21:07:04.731360 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-gpnwn" event={"ID":"2d0f2c52-868b-4753-9d83-9d7204ea6d2d","Type":"ContainerStarted","Data":"2f941752d249f3818a5cef664757954b9b923770e3d0f5d823dc5eb24248bd9d"} Feb 16 21:07:13 crc kubenswrapper[4805]: I0216 21:07:13.818318 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" 
event={"ID":"efa9b8e9-54a3-4740-9f0e-391521f3ed25","Type":"ContainerStarted","Data":"817e67d70439fa807aea11b7ba818e3ba3fa90b7297de9ef3b390f6b3137db95"} Feb 16 21:07:13 crc kubenswrapper[4805]: I0216 21:07:13.818961 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:07:13 crc kubenswrapper[4805]: I0216 21:07:13.820293 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-gpnwn" event={"ID":"2d0f2c52-868b-4753-9d83-9d7204ea6d2d","Type":"ContainerStarted","Data":"06d61a7752960a0a491601396945928a81c2e998be1c043f7bde7629e52f8415"} Feb 16 21:07:13 crc kubenswrapper[4805]: I0216 21:07:13.823382 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" Feb 16 21:07:13 crc kubenswrapper[4805]: I0216 21:07:13.839236 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6c4778c849-ds7n7" podStartSLOduration=2.133678754 podStartE2EDuration="15.839211041s" podCreationTimestamp="2026-02-16 21:06:58 +0000 UTC" firstStartedPulling="2026-02-16 21:06:59.27242339 +0000 UTC m=+637.091106685" lastFinishedPulling="2026-02-16 21:07:12.977955677 +0000 UTC m=+650.796638972" observedRunningTime="2026-02-16 21:07:13.837485767 +0000 UTC m=+651.656169102" watchObservedRunningTime="2026-02-16 21:07:13.839211041 +0000 UTC m=+651.657894366" Feb 16 21:07:13 crc kubenswrapper[4805]: I0216 21:07:13.878238 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-gpnwn" podStartSLOduration=3.396805604 podStartE2EDuration="11.878213553s" podCreationTimestamp="2026-02-16 21:07:02 +0000 UTC" firstStartedPulling="2026-02-16 21:07:04.493508778 +0000 UTC m=+642.312192073" 
lastFinishedPulling="2026-02-16 21:07:12.974916727 +0000 UTC m=+650.793600022" observedRunningTime="2026-02-16 21:07:13.873311387 +0000 UTC m=+651.691994702" watchObservedRunningTime="2026-02-16 21:07:13.878213553 +0000 UTC m=+651.696896848" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.075780 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.076811 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.079076 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.079651 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.114006 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.186860 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhsrl\" (UniqueName: \"kubernetes.io/projected/89163d01-1bcd-4541-b219-7b5467c2dc5d-kube-api-access-bhsrl\") pod \"minio\" (UID: \"89163d01-1bcd-4541-b219-7b5467c2dc5d\") " pod="minio-dev/minio" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.186904 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-da2ec847-2416-4817-984c-0b1fd5ab19e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2ec847-2416-4817-984c-0b1fd5ab19e9\") pod \"minio\" (UID: \"89163d01-1bcd-4541-b219-7b5467c2dc5d\") " pod="minio-dev/minio" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.288904 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhsrl\" (UniqueName: 
\"kubernetes.io/projected/89163d01-1bcd-4541-b219-7b5467c2dc5d-kube-api-access-bhsrl\") pod \"minio\" (UID: \"89163d01-1bcd-4541-b219-7b5467c2dc5d\") " pod="minio-dev/minio" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.288958 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-da2ec847-2416-4817-984c-0b1fd5ab19e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2ec847-2416-4817-984c-0b1fd5ab19e9\") pod \"minio\" (UID: \"89163d01-1bcd-4541-b219-7b5467c2dc5d\") " pod="minio-dev/minio" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.291118 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.291147 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-da2ec847-2416-4817-984c-0b1fd5ab19e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2ec847-2416-4817-984c-0b1fd5ab19e9\") pod \"minio\" (UID: \"89163d01-1bcd-4541-b219-7b5467c2dc5d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/910a5a5a8f716b49e703ab842298b2b3a43225e9a334dcf813c187265cf1efe6/globalmount\"" pod="minio-dev/minio" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.313202 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhsrl\" (UniqueName: \"kubernetes.io/projected/89163d01-1bcd-4541-b219-7b5467c2dc5d-kube-api-access-bhsrl\") pod \"minio\" (UID: \"89163d01-1bcd-4541-b219-7b5467c2dc5d\") " pod="minio-dev/minio" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.322678 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-da2ec847-2416-4817-984c-0b1fd5ab19e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2ec847-2416-4817-984c-0b1fd5ab19e9\") pod \"minio\" (UID: 
\"89163d01-1bcd-4541-b219-7b5467c2dc5d\") " pod="minio-dev/minio" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.413963 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.624996 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 21:07:19 crc kubenswrapper[4805]: I0216 21:07:19.864468 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"89163d01-1bcd-4541-b219-7b5467c2dc5d","Type":"ContainerStarted","Data":"c52f92e601f750ac66e7fca37295dc4e934adb90ade38054527530b67b9f8eba"} Feb 16 21:07:22 crc kubenswrapper[4805]: I0216 21:07:22.891831 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"89163d01-1bcd-4541-b219-7b5467c2dc5d","Type":"ContainerStarted","Data":"12397119cf2a34eb813b72eaff3efe697b6192d9ebebb4a5009c2e8495615864"} Feb 16 21:07:22 crc kubenswrapper[4805]: I0216 21:07:22.907691 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.054047638 podStartE2EDuration="6.907676643s" podCreationTimestamp="2026-02-16 21:07:16 +0000 UTC" firstStartedPulling="2026-02-16 21:07:19.632695859 +0000 UTC m=+657.451379164" lastFinishedPulling="2026-02-16 21:07:22.486324884 +0000 UTC m=+660.305008169" observedRunningTime="2026-02-16 21:07:22.906152903 +0000 UTC m=+660.724836198" watchObservedRunningTime="2026-02-16 21:07:22.907676643 +0000 UTC m=+660.726359938" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.401505 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw"] Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.402970 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.407654 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.407991 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.408061 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.407992 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.408189 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-dhdf6" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.414145 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw"] Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.451444 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c52p7\" (UniqueName: \"kubernetes.io/projected/26464e34-2dcc-45f5-a73a-94fd7fa041b8-kube-api-access-c52p7\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.451504 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26464e34-2dcc-45f5-a73a-94fd7fa041b8-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: 
\"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.451533 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26464e34-2dcc-45f5-a73a-94fd7fa041b8-config\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.451635 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/26464e34-2dcc-45f5-a73a-94fd7fa041b8-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.451670 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/26464e34-2dcc-45f5-a73a-94fd7fa041b8-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.547883 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-5j28p"] Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.548613 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.551884 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.551930 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.552560 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c52p7\" (UniqueName: \"kubernetes.io/projected/26464e34-2dcc-45f5-a73a-94fd7fa041b8-kube-api-access-c52p7\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.552617 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26464e34-2dcc-45f5-a73a-94fd7fa041b8-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.552648 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26464e34-2dcc-45f5-a73a-94fd7fa041b8-config\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.552768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/26464e34-2dcc-45f5-a73a-94fd7fa041b8-logging-loki-distributor-grpc\") 
pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.552806 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/26464e34-2dcc-45f5-a73a-94fd7fa041b8-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.554358 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26464e34-2dcc-45f5-a73a-94fd7fa041b8-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.555001 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26464e34-2dcc-45f5-a73a-94fd7fa041b8-config\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.556532 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.560883 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/26464e34-2dcc-45f5-a73a-94fd7fa041b8-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " 
pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.565362 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/26464e34-2dcc-45f5-a73a-94fd7fa041b8-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.571514 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-5j28p"] Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.641421 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c52p7\" (UniqueName: \"kubernetes.io/projected/26464e34-2dcc-45f5-a73a-94fd7fa041b8-kube-api-access-c52p7\") pod \"logging-loki-distributor-5d5548c9f5-4rxpw\" (UID: \"26464e34-2dcc-45f5-a73a-94fd7fa041b8\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.653858 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.653946 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrht6\" (UniqueName: \"kubernetes.io/projected/5b58cbb2-c2de-4f33-a1ea-344729a67d13-kube-api-access-jrht6\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.653984 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.654109 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.654233 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b58cbb2-c2de-4f33-a1ea-344729a67d13-config\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.654308 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.665139 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k"] Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.671156 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.674939 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.675085 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.683463 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k"] Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.724812 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.734549 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b"] Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.735663 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.738797 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.738972 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.739068 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.739217 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.739229 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.743101 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8"] Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.746375 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.750937 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-25z2k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755478 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-lokistack-gateway\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755506 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-rbac\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755559 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snwqw\" (UniqueName: \"kubernetes.io/projected/31af895c-b793-4e0f-bae7-031db6fe786f-kube-api-access-snwqw\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755586 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm85l\" (UniqueName: \"kubernetes.io/projected/c26f79ee-1643-4837-a0af-94910dafc8a7-kube-api-access-pm85l\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755614 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755640 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/31af895c-b793-4e0f-bae7-031db6fe786f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755686 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrht6\" (UniqueName: \"kubernetes.io/projected/5b58cbb2-c2de-4f33-a1ea-344729a67d13-kube-api-access-jrht6\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755749 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755776 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755830 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31af895c-b793-4e0f-bae7-031db6fe786f-config\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755878 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31af895c-b793-4e0f-bae7-031db6fe786f-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755900 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c26f79ee-1643-4837-a0af-94910dafc8a7-tls-secret\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: 
\"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755926 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c26f79ee-1643-4837-a0af-94910dafc8a7-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755950 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.755975 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b58cbb2-c2de-4f33-a1ea-344729a67d13-config\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.756019 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c26f79ee-1643-4837-a0af-94910dafc8a7-tenants\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.756045 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/31af895c-b793-4e0f-bae7-031db6fe786f-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.756078 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.758224 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b58cbb2-c2de-4f33-a1ea-344729a67d13-config\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.759351 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.763485 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.767278 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.769607 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b"] Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.771892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5b58cbb2-c2de-4f33-a1ea-344729a67d13-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.787513 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrht6\" (UniqueName: \"kubernetes.io/projected/5b58cbb2-c2de-4f33-a1ea-344729a67d13-kube-api-access-jrht6\") pod \"logging-loki-querier-76bf7b6d45-5j28p\" (UID: \"5b58cbb2-c2de-4f33-a1ea-344729a67d13\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.801608 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8"] Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.857602 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c26f79ee-1643-4837-a0af-94910dafc8a7-tenants\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: 
\"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.857924 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858009 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/31af895c-b793-4e0f-bae7-031db6fe786f-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858465 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858545 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-lokistack-gateway\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858564 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-rbac\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858597 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snwqw\" (UniqueName: \"kubernetes.io/projected/31af895c-b793-4e0f-bae7-031db6fe786f-kube-api-access-snwqw\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm85l\" (UniqueName: \"kubernetes.io/projected/c26f79ee-1643-4837-a0af-94910dafc8a7-kube-api-access-pm85l\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858640 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqd7j\" (UniqueName: \"kubernetes.io/projected/95dd962a-260e-4c6d-9e07-c5b99377f3e5-kube-api-access-qqd7j\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858661 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " 
pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858681 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/95dd962a-260e-4c6d-9e07-c5b99377f3e5-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858700 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/31af895c-b793-4e0f-bae7-031db6fe786f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858745 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-lokistack-gateway\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858799 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31af895c-b793-4e0f-bae7-031db6fe786f-config\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858815 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-rbac\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858831 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/95dd962a-260e-4c6d-9e07-c5b99377f3e5-tls-secret\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858848 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/95dd962a-260e-4c6d-9e07-c5b99377f3e5-tenants\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31af895c-b793-4e0f-bae7-031db6fe786f-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858901 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c26f79ee-1643-4837-a0af-94910dafc8a7-tls-secret\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 
21:07:29.858918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c26f79ee-1643-4837-a0af-94910dafc8a7-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.858933 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.859789 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.860618 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31af895c-b793-4e0f-bae7-031db6fe786f-config\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.860773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/31af895c-b793-4e0f-bae7-031db6fe786f-logging-loki-query-frontend-http\") pod 
\"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.861923 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31af895c-b793-4e0f-bae7-031db6fe786f-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.862971 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.863213 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c26f79ee-1643-4837-a0af-94910dafc8a7-tenants\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.863754 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-lokistack-gateway\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.863940 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/c26f79ee-1643-4837-a0af-94910dafc8a7-rbac\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.866129 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/31af895c-b793-4e0f-bae7-031db6fe786f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.869317 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c26f79ee-1643-4837-a0af-94910dafc8a7-tls-secret\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.869594 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c26f79ee-1643-4837-a0af-94910dafc8a7-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.881336 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm85l\" (UniqueName: \"kubernetes.io/projected/c26f79ee-1643-4837-a0af-94910dafc8a7-kube-api-access-pm85l\") pod \"logging-loki-gateway-85cf5dc48c-dwk5b\" (UID: \"c26f79ee-1643-4837-a0af-94910dafc8a7\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 
21:07:29.891554 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snwqw\" (UniqueName: \"kubernetes.io/projected/31af895c-b793-4e0f-bae7-031db6fe786f-kube-api-access-snwqw\") pod \"logging-loki-query-frontend-6d6859c548-x4j4k\" (UID: \"31af895c-b793-4e0f-bae7-031db6fe786f\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.961536 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqd7j\" (UniqueName: \"kubernetes.io/projected/95dd962a-260e-4c6d-9e07-c5b99377f3e5-kube-api-access-qqd7j\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.962050 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/95dd962a-260e-4c6d-9e07-c5b99377f3e5-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.962100 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-lokistack-gateway\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.962156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/95dd962a-260e-4c6d-9e07-c5b99377f3e5-tls-secret\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: 
\"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.962180 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-rbac\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.962199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/95dd962a-260e-4c6d-9e07-c5b99377f3e5-tenants\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.962247 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.962281 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.963671 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.965343 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-lokistack-gateway\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.965429 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.966532 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/95dd962a-260e-4c6d-9e07-c5b99377f3e5-rbac\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.967285 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/95dd962a-260e-4c6d-9e07-c5b99377f3e5-tls-secret\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.967362 4805 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/95dd962a-260e-4c6d-9e07-c5b99377f3e5-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.968158 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/95dd962a-260e-4c6d-9e07-c5b99377f3e5-tenants\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.971272 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.986470 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqd7j\" (UniqueName: \"kubernetes.io/projected/95dd962a-260e-4c6d-9e07-c5b99377f3e5-kube-api-access-qqd7j\") pod \"logging-loki-gateway-85cf5dc48c-4fck8\" (UID: \"95dd962a-260e-4c6d-9e07-c5b99377f3e5\") " pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:29 crc kubenswrapper[4805]: I0216 21:07:29.999414 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.111514 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.130174 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.247676 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw"] Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.424589 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-5j28p"] Feb 16 21:07:30 crc kubenswrapper[4805]: W0216 21:07:30.434704 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b58cbb2_c2de_4f33_a1ea_344729a67d13.slice/crio-5d66eab1e1f5b28059ce14aa16aae819b7dc9d58b7a6ca9423c23bd1e8fbef12 WatchSource:0}: Error finding container 5d66eab1e1f5b28059ce14aa16aae819b7dc9d58b7a6ca9423c23bd1e8fbef12: Status 404 returned error can't find the container with id 5d66eab1e1f5b28059ce14aa16aae819b7dc9d58b7a6ca9423c23bd1e8fbef12 Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.532312 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k"] Feb 16 21:07:30 crc kubenswrapper[4805]: W0216 21:07:30.536418 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31af895c_b793_4e0f_bae7_031db6fe786f.slice/crio-d708015a5078a849c97efc9d8673d5203c701da7ec3191e2aa2c62df0b0ec9b4 WatchSource:0}: Error finding container d708015a5078a849c97efc9d8673d5203c701da7ec3191e2aa2c62df0b0ec9b4: Status 404 returned error can't find the container with id d708015a5078a849c97efc9d8673d5203c701da7ec3191e2aa2c62df0b0ec9b4 Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.545868 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.547687 4805 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.563696 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.569097 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.569207 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.654463 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b"] Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.658873 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8"] Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.673319 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.673406 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.673533 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-f484d9a6-c5fa-4dc1-80a3-d35f7987927f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f484d9a6-c5fa-4dc1-80a3-d35f7987927f\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.673578 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae1f136-5edb-45c8-bba0-e6a50c1f8084-config\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.673597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.673615 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d859b838-52d2-4448-8246-23e527326927\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d859b838-52d2-4448-8246-23e527326927\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.673665 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrcxh\" (UniqueName: \"kubernetes.io/projected/dae1f136-5edb-45c8-bba0-e6a50c1f8084-kube-api-access-hrcxh\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc 
kubenswrapper[4805]: I0216 21:07:30.673690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.680777 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.681704 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.684940 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.686533 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.689731 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.774789 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.774847 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f484d9a6-c5fa-4dc1-80a3-d35f7987927f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f484d9a6-c5fa-4dc1-80a3-d35f7987927f\") pod 
\"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.774873 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae1f136-5edb-45c8-bba0-e6a50c1f8084-config\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.774894 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.774916 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d859b838-52d2-4448-8246-23e527326927\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d859b838-52d2-4448-8246-23e527326927\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.774968 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrcxh\" (UniqueName: \"kubernetes.io/projected/dae1f136-5edb-45c8-bba0-e6a50c1f8084-kube-api-access-hrcxh\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.774988 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-s3\") pod 
\"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.775034 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.776081 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae1f136-5edb-45c8-bba0-e6a50c1f8084-config\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.776499 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.778995 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.779044 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f484d9a6-c5fa-4dc1-80a3-d35f7987927f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f484d9a6-c5fa-4dc1-80a3-d35f7987927f\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ebf6ea95a8eb326ab1237a829d0f1e3be38d5babc056dfd42805e3c24589caae/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.779900 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.780028 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d859b838-52d2-4448-8246-23e527326927\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d859b838-52d2-4448-8246-23e527326927\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3255dd94ce46eec12a87e8c6c4dbd29bc76bb260b5ec9ac097093c12302c6823/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.783169 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.783437 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: 
\"kubernetes.io/secret/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.784864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/dae1f136-5edb-45c8-bba0-e6a50c1f8084-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.945318 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" event={"ID":"31af895c-b793-4e0f-bae7-031db6fe786f","Type":"ContainerStarted","Data":"d708015a5078a849c97efc9d8673d5203c701da7ec3191e2aa2c62df0b0ec9b4"} Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.947123 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" event={"ID":"26464e34-2dcc-45f5-a73a-94fd7fa041b8","Type":"ContainerStarted","Data":"6819d7aedd86e81596eb3d5385c872829986f0847c8d82788b3b0c3596682e74"} Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.948494 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" event={"ID":"95dd962a-260e-4c6d-9e07-c5b99377f3e5","Type":"ContainerStarted","Data":"6c218f82b31087d71ad983cdcc9708d5662da33880a66dbab6d9e7f355206e67"} Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.949923 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" event={"ID":"c26f79ee-1643-4837-a0af-94910dafc8a7","Type":"ContainerStarted","Data":"a0d1e0794d6277c32c1ee18495a0b640118e9b644bda756b64214e49de02f37c"} Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.951074 
4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" event={"ID":"5b58cbb2-c2de-4f33-a1ea-344729a67d13","Type":"ContainerStarted","Data":"5d66eab1e1f5b28059ce14aa16aae819b7dc9d58b7a6ca9423c23bd1e8fbef12"} Feb 16 21:07:30 crc kubenswrapper[4805]: I0216 21:07:30.997196 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrcxh\" (UniqueName: \"kubernetes.io/projected/dae1f136-5edb-45c8-bba0-e6a50c1f8084-kube-api-access-hrcxh\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.028039 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d859b838-52d2-4448-8246-23e527326927\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d859b838-52d2-4448-8246-23e527326927\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.056492 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f484d9a6-c5fa-4dc1-80a3-d35f7987927f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f484d9a6-c5fa-4dc1-80a3-d35f7987927f\") pod \"logging-loki-ingester-0\" (UID: \"dae1f136-5edb-45c8-bba0-e6a50c1f8084\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.079355 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 
21:07:31.079703 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.079827 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d94ec43e-3a93-49d2-aa96-c781442f21cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.079941 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-035898dc-d756-4756-a0f5-10c7c7a25333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-035898dc-d756-4756-a0f5-10c7c7a25333\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.080066 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.080156 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-ca-bundle\") pod 
\"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.080287 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6wl\" (UniqueName: \"kubernetes.io/projected/d94ec43e-3a93-49d2-aa96-c781442f21cd-kube-api-access-dj6wl\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.181056 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6wl\" (UniqueName: \"kubernetes.io/projected/d94ec43e-3a93-49d2-aa96-c781442f21cd-kube-api-access-dj6wl\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.181115 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.181164 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.181181 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d94ec43e-3a93-49d2-aa96-c781442f21cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.181208 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-035898dc-d756-4756-a0f5-10c7c7a25333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-035898dc-d756-4756-a0f5-10c7c7a25333\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.181234 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.181251 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.182825 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d94ec43e-3a93-49d2-aa96-c781442f21cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.184021 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.184123 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-035898dc-d756-4756-a0f5-10c7c7a25333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-035898dc-d756-4756-a0f5-10c7c7a25333\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8de0bca0bad57893135a982d4be0a8ada88142f9f2b7d4069ac446b399b56561/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.185055 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.185083 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.186039 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.188550 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d94ec43e-3a93-49d2-aa96-c781442f21cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.196442 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj6wl\" (UniqueName: \"kubernetes.io/projected/d94ec43e-3a93-49d2-aa96-c781442f21cd-kube-api-access-dj6wl\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.210448 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-035898dc-d756-4756-a0f5-10c7c7a25333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-035898dc-d756-4756-a0f5-10c7c7a25333\") pod \"logging-loki-index-gateway-0\" (UID: \"d94ec43e-3a93-49d2-aa96-c781442f21cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.226974 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.359637 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.625047 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.630188 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 21:07:31 crc kubenswrapper[4805]: W0216 21:07:31.642076 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd94ec43e_3a93_49d2_aa96_c781442f21cd.slice/crio-cf83302e18177366eaa1cc451854f9118b36f053e466a21174d3b936dbdd2e33 WatchSource:0}: Error finding container cf83302e18177366eaa1cc451854f9118b36f053e466a21174d3b936dbdd2e33: Status 404 returned error can't find the container with id cf83302e18177366eaa1cc451854f9118b36f053e466a21174d3b936dbdd2e33 Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.960806 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"d94ec43e-3a93-49d2-aa96-c781442f21cd","Type":"ContainerStarted","Data":"cf83302e18177366eaa1cc451854f9118b36f053e466a21174d3b936dbdd2e33"} Feb 16 21:07:31 crc kubenswrapper[4805]: I0216 21:07:31.962108 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"dae1f136-5edb-45c8-bba0-e6a50c1f8084","Type":"ContainerStarted","Data":"4d435fdaf5cc5d1a90a2e5e1151d1fa06d96f4b95048881d54bb6bda2d23882d"} Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.093154 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.100966 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.122063 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.123161 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.129847 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.196898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lzhh\" (UniqueName: \"kubernetes.io/projected/446ae7b1-0d4c-43f4-9580-8b0472211510-kube-api-access-5lzhh\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.196959 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ca76026d-223f-41d0-a987-25f1f4151e96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca76026d-223f-41d0-a987-25f1f4151e96\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.197003 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446ae7b1-0d4c-43f4-9580-8b0472211510-config\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.197026 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.197055 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.197073 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.197095 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.298246 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lzhh\" (UniqueName: \"kubernetes.io/projected/446ae7b1-0d4c-43f4-9580-8b0472211510-kube-api-access-5lzhh\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " 
pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.298292 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ca76026d-223f-41d0-a987-25f1f4151e96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca76026d-223f-41d0-a987-25f1f4151e96\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.298312 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446ae7b1-0d4c-43f4-9580-8b0472211510-config\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.298332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.298360 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.298377 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: 
\"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.298691 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.300570 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446ae7b1-0d4c-43f4-9580-8b0472211510-config\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.301752 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.304062 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.304103 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ca76026d-223f-41d0-a987-25f1f4151e96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca76026d-223f-41d0-a987-25f1f4151e96\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/76ae71c139c20e8ed733bb90b89ba58e4e19420bee6fb975299d2753dc8ea4c5/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.305811 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.320480 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.324649 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lzhh\" (UniqueName: \"kubernetes.io/projected/446ae7b1-0d4c-43f4-9580-8b0472211510-kube-api-access-5lzhh\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.328377 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: 
\"kubernetes.io/secret/446ae7b1-0d4c-43f4-9580-8b0472211510-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.340272 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ca76026d-223f-41d0-a987-25f1f4151e96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca76026d-223f-41d0-a987-25f1f4151e96\") pod \"logging-loki-compactor-0\" (UID: \"446ae7b1-0d4c-43f4-9580-8b0472211510\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:32 crc kubenswrapper[4805]: I0216 21:07:32.441365 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:34 crc kubenswrapper[4805]: I0216 21:07:34.008217 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 21:07:34 crc kubenswrapper[4805]: I0216 21:07:34.997642 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" event={"ID":"c26f79ee-1643-4837-a0af-94910dafc8a7","Type":"ContainerStarted","Data":"5d4f0910b80bebc603921956f05cb5c29fedec9c95f031b012551ead93872125"} Feb 16 21:07:34 crc kubenswrapper[4805]: I0216 21:07:34.999442 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" event={"ID":"5b58cbb2-c2de-4f33-a1ea-344729a67d13","Type":"ContainerStarted","Data":"7e70b120e94712c132f5bfeef17963ec80751c97d4e55acbee73a606c125197e"} Feb 16 21:07:34 crc kubenswrapper[4805]: I0216 21:07:34.999528 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.001462 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" event={"ID":"31af895c-b793-4e0f-bae7-031db6fe786f","Type":"ContainerStarted","Data":"02c787519946db2d684a0520ffe22f77beeb5cd827f718908b4798a591ef767e"} Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.001642 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.004172 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" event={"ID":"26464e34-2dcc-45f5-a73a-94fd7fa041b8","Type":"ContainerStarted","Data":"6d0d809dff61fb18bb825e000b5a709618ccd55e9b3efb55cd071efa383636b5"} Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.005414 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.008153 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" event={"ID":"95dd962a-260e-4c6d-9e07-c5b99377f3e5","Type":"ContainerStarted","Data":"0e2ea28be48c9ee389602c29bdcf7c38dff0184d680f4ff188a65ec27c0c6302"} Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.013235 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"d94ec43e-3a93-49d2-aa96-c781442f21cd","Type":"ContainerStarted","Data":"c52c99d060f08ddc338e73e64fb3966351b010bed878720f298b76919d10ec98"} Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.013385 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.015189 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"446ae7b1-0d4c-43f4-9580-8b0472211510","Type":"ContainerStarted","Data":"e1ba1ae77ef214d717d63e138348bf322bf9208649158a84c41ed6fdf32954d8"} Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.015225 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"446ae7b1-0d4c-43f4-9580-8b0472211510","Type":"ContainerStarted","Data":"96e518bea001fc2032c3d959a1cf9f52d61a54edcb4e697317118331bccc6a2e"} Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.015396 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.018877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"dae1f136-5edb-45c8-bba0-e6a50c1f8084","Type":"ContainerStarted","Data":"6421ce690594b2ebcfcbaed311e97805c3e3a2659a82a74052513a17075fd4ef"} Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.019534 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.059065 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.886678135 podStartE2EDuration="6.059036647s" podCreationTimestamp="2026-02-16 21:07:29 +0000 UTC" firstStartedPulling="2026-02-16 21:07:31.64491035 +0000 UTC m=+669.463593645" lastFinishedPulling="2026-02-16 21:07:33.817268862 +0000 UTC m=+671.635952157" observedRunningTime="2026-02-16 21:07:35.052537338 +0000 UTC m=+672.871220673" watchObservedRunningTime="2026-02-16 21:07:35.059036647 +0000 UTC m=+672.877719982" Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.064162 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" 
podStartSLOduration=2.682028355 podStartE2EDuration="6.064141209s" podCreationTimestamp="2026-02-16 21:07:29 +0000 UTC" firstStartedPulling="2026-02-16 21:07:30.436548325 +0000 UTC m=+668.255231610" lastFinishedPulling="2026-02-16 21:07:33.818661169 +0000 UTC m=+671.637344464" observedRunningTime="2026-02-16 21:07:35.025375171 +0000 UTC m=+672.844058526" watchObservedRunningTime="2026-02-16 21:07:35.064141209 +0000 UTC m=+672.882824544" Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.082965 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=6.08294606 podStartE2EDuration="6.08294606s" podCreationTimestamp="2026-02-16 21:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:07:35.079948471 +0000 UTC m=+672.898631806" watchObservedRunningTime="2026-02-16 21:07:35.08294606 +0000 UTC m=+672.901629365" Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.105483 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.923658276 podStartE2EDuration="6.105461576s" podCreationTimestamp="2026-02-16 21:07:29 +0000 UTC" firstStartedPulling="2026-02-16 21:07:31.631590075 +0000 UTC m=+669.450273360" lastFinishedPulling="2026-02-16 21:07:33.813393365 +0000 UTC m=+671.632076660" observedRunningTime="2026-02-16 21:07:35.103400406 +0000 UTC m=+672.922083711" watchObservedRunningTime="2026-02-16 21:07:35.105461576 +0000 UTC m=+672.924144891" Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.151606 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" podStartSLOduration=2.617810853 podStartE2EDuration="6.151582539s" podCreationTimestamp="2026-02-16 21:07:29 +0000 UTC" firstStartedPulling="2026-02-16 
21:07:30.279622879 +0000 UTC m=+668.098306174" lastFinishedPulling="2026-02-16 21:07:33.813394555 +0000 UTC m=+671.632077860" observedRunningTime="2026-02-16 21:07:35.129492511 +0000 UTC m=+672.948175866" watchObservedRunningTime="2026-02-16 21:07:35.151582539 +0000 UTC m=+672.970265854" Feb 16 21:07:35 crc kubenswrapper[4805]: I0216 21:07:35.153565 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" podStartSLOduration=2.9256946360000002 podStartE2EDuration="6.153551438s" podCreationTimestamp="2026-02-16 21:07:29 +0000 UTC" firstStartedPulling="2026-02-16 21:07:30.542895649 +0000 UTC m=+668.361578944" lastFinishedPulling="2026-02-16 21:07:33.770752451 +0000 UTC m=+671.589435746" observedRunningTime="2026-02-16 21:07:35.145891566 +0000 UTC m=+672.964574901" watchObservedRunningTime="2026-02-16 21:07:35.153551438 +0000 UTC m=+672.972234743" Feb 16 21:07:37 crc kubenswrapper[4805]: I0216 21:07:37.040946 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" event={"ID":"95dd962a-260e-4c6d-9e07-c5b99377f3e5","Type":"ContainerStarted","Data":"6c4aa77078067e7b5c7369a0ac2535987edf33fee105e5a74520d8a85a84d40f"} Feb 16 21:07:37 crc kubenswrapper[4805]: I0216 21:07:37.041328 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:37 crc kubenswrapper[4805]: I0216 21:07:37.042087 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:37 crc kubenswrapper[4805]: I0216 21:07:37.043416 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" event={"ID":"c26f79ee-1643-4837-a0af-94910dafc8a7","Type":"ContainerStarted","Data":"2ab8b36b11a3a105c5d4e581d924f69decf33a3e1b35e15f698507d98c1eba49"} Feb 
16 21:07:37 crc kubenswrapper[4805]: I0216 21:07:37.044481 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:37 crc kubenswrapper[4805]: I0216 21:07:37.056968 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:37 crc kubenswrapper[4805]: I0216 21:07:37.058622 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" Feb 16 21:07:37 crc kubenswrapper[4805]: I0216 21:07:37.060835 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:37 crc kubenswrapper[4805]: I0216 21:07:37.076803 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-4fck8" podStartSLOduration=2.564656561 podStartE2EDuration="8.07678146s" podCreationTimestamp="2026-02-16 21:07:29 +0000 UTC" firstStartedPulling="2026-02-16 21:07:30.66165878 +0000 UTC m=+668.480342075" lastFinishedPulling="2026-02-16 21:07:36.173783679 +0000 UTC m=+673.992466974" observedRunningTime="2026-02-16 21:07:37.070356453 +0000 UTC m=+674.889039748" watchObservedRunningTime="2026-02-16 21:07:37.07678146 +0000 UTC m=+674.895464755" Feb 16 21:07:37 crc kubenswrapper[4805]: I0216 21:07:37.112036 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" podStartSLOduration=2.605322646 podStartE2EDuration="8.112008457s" podCreationTimestamp="2026-02-16 21:07:29 +0000 UTC" firstStartedPulling="2026-02-16 21:07:30.657333024 +0000 UTC m=+668.476016319" lastFinishedPulling="2026-02-16 21:07:36.164018825 +0000 UTC m=+673.982702130" observedRunningTime="2026-02-16 21:07:37.099509719 +0000 UTC m=+674.918193014" 
watchObservedRunningTime="2026-02-16 21:07:37.112008457 +0000 UTC m=+674.930691792" Feb 16 21:07:38 crc kubenswrapper[4805]: I0216 21:07:38.051166 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:38 crc kubenswrapper[4805]: I0216 21:07:38.059270 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85cf5dc48c-dwk5b" Feb 16 21:07:49 crc kubenswrapper[4805]: I0216 21:07:49.734116 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-4rxpw" Feb 16 21:07:49 crc kubenswrapper[4805]: I0216 21:07:49.977374 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5j28p" Feb 16 21:07:50 crc kubenswrapper[4805]: I0216 21:07:50.021351 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-x4j4k" Feb 16 21:07:51 crc kubenswrapper[4805]: I0216 21:07:51.233388 4805 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 16 21:07:51 crc kubenswrapper[4805]: I0216 21:07:51.233451 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="dae1f136-5edb-45c8-bba0-e6a50c1f8084" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 21:07:51 crc kubenswrapper[4805]: I0216 21:07:51.367780 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:07:52 crc kubenswrapper[4805]: I0216 21:07:52.451471 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:08:01 crc kubenswrapper[4805]: I0216 21:08:01.253598 4805 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 16 21:08:01 crc kubenswrapper[4805]: I0216 21:08:01.254267 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="dae1f136-5edb-45c8-bba0-e6a50c1f8084" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 21:08:08 crc kubenswrapper[4805]: I0216 21:08:08.100254 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:08:08 crc kubenswrapper[4805]: I0216 21:08:08.100966 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:08:11 crc kubenswrapper[4805]: I0216 21:08:11.230827 4805 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 16 21:08:11 crc kubenswrapper[4805]: I0216 21:08:11.231172 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" 
podUID="dae1f136-5edb-45c8-bba0-e6a50c1f8084" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 21:08:21 crc kubenswrapper[4805]: I0216 21:08:21.235066 4805 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 16 21:08:21 crc kubenswrapper[4805]: I0216 21:08:21.235924 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="dae1f136-5edb-45c8-bba0-e6a50c1f8084" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 21:08:31 crc kubenswrapper[4805]: I0216 21:08:31.234136 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:08:38 crc kubenswrapper[4805]: I0216 21:08:38.100270 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:08:38 crc kubenswrapper[4805]: I0216 21:08:38.101194 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.817881 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-zc5gw"] Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.819907 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-zc5gw" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.823188 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.824742 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.824870 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-2pbdp" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.825032 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.825881 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.841204 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.852566 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-zc5gw"] Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.918946 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-zc5gw"] Feb 16 21:08:47 crc kubenswrapper[4805]: E0216 21:08:47.919483 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-kn2bp metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-zc5gw" podUID="3a27846a-1db8-4b41-aff3-3c8a79170220" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.953635 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-token\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.953702 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-entrypoint\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.953769 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3a27846a-1db8-4b41-aff3-3c8a79170220-datadir\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.953822 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config-openshift-service-cacrt\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.953842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-metrics\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.953872 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-sa-token\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.953925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-trusted-ca\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.953943 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn2bp\" (UniqueName: \"kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-kube-api-access-kn2bp\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.953959 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.953981 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a27846a-1db8-4b41-aff3-3c8a79170220-tmp\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:47 crc kubenswrapper[4805]: I0216 21:08:47.954000 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: 
\"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-syslog-receiver\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.055765 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-token\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.055808 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-entrypoint\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.055844 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3a27846a-1db8-4b41-aff3-3c8a79170220-datadir\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.055886 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config-openshift-service-cacrt\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.055901 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-metrics\") pod \"collector-zc5gw\" (UID: 
\"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.055926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-sa-token\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.055963 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-trusted-ca\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.056019 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3a27846a-1db8-4b41-aff3-3c8a79170220-datadir\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.056059 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn2bp\" (UniqueName: \"kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-kube-api-access-kn2bp\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: E0216 21:08:48.056180 4805 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 16 21:08:48 crc kubenswrapper[4805]: E0216 21:08:48.056257 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-metrics podName:3a27846a-1db8-4b41-aff3-3c8a79170220 nodeName:}" failed. 
No retries permitted until 2026-02-16 21:08:48.556235331 +0000 UTC m=+746.374918626 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-metrics") pod "collector-zc5gw" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220") : secret "collector-metrics" not found Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.056368 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.056400 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a27846a-1db8-4b41-aff3-3c8a79170220-tmp\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.056420 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-syslog-receiver\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.057212 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config-openshift-service-cacrt\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.057248 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" 
(UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-entrypoint\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.057344 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.057354 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-trusted-ca\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.060864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a27846a-1db8-4b41-aff3-3c8a79170220-tmp\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.067443 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-token\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.074113 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-sa-token\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc 
kubenswrapper[4805]: I0216 21:08:48.075306 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn2bp\" (UniqueName: \"kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-kube-api-access-kn2bp\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.085294 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-syslog-receiver\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.564922 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-metrics\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.574518 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-metrics\") pod \"collector-zc5gw\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.733057 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.751462 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-zc5gw" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.870334 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn2bp\" (UniqueName: \"kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-kube-api-access-kn2bp\") pod \"3a27846a-1db8-4b41-aff3-3c8a79170220\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.871214 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3a27846a-1db8-4b41-aff3-3c8a79170220-datadir\") pod \"3a27846a-1db8-4b41-aff3-3c8a79170220\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.871325 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a27846a-1db8-4b41-aff3-3c8a79170220-datadir" (OuterVolumeSpecName: "datadir") pod "3a27846a-1db8-4b41-aff3-3c8a79170220" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.871462 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config-openshift-service-cacrt\") pod \"3a27846a-1db8-4b41-aff3-3c8a79170220\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.871604 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-token\") pod \"3a27846a-1db8-4b41-aff3-3c8a79170220\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.871707 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-trusted-ca\") pod \"3a27846a-1db8-4b41-aff3-3c8a79170220\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.871871 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-metrics\") pod \"3a27846a-1db8-4b41-aff3-3c8a79170220\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.871998 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-entrypoint\") pod \"3a27846a-1db8-4b41-aff3-3c8a79170220\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.872109 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config\") pod \"3a27846a-1db8-4b41-aff3-3c8a79170220\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.872214 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-sa-token\") pod \"3a27846a-1db8-4b41-aff3-3c8a79170220\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.872334 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a27846a-1db8-4b41-aff3-3c8a79170220-tmp\") pod \"3a27846a-1db8-4b41-aff3-3c8a79170220\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.872479 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-syslog-receiver\") pod \"3a27846a-1db8-4b41-aff3-3c8a79170220\" (UID: \"3a27846a-1db8-4b41-aff3-3c8a79170220\") " Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.872980 4805 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3a27846a-1db8-4b41-aff3-3c8a79170220-datadir\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.874020 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "3a27846a-1db8-4b41-aff3-3c8a79170220" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.874110 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3a27846a-1db8-4b41-aff3-3c8a79170220" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.874369 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config" (OuterVolumeSpecName: "config") pod "3a27846a-1db8-4b41-aff3-3c8a79170220" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.874461 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "3a27846a-1db8-4b41-aff3-3c8a79170220" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.876510 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-kube-api-access-kn2bp" (OuterVolumeSpecName: "kube-api-access-kn2bp") pod "3a27846a-1db8-4b41-aff3-3c8a79170220" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220"). InnerVolumeSpecName "kube-api-access-kn2bp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.877173 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a27846a-1db8-4b41-aff3-3c8a79170220-tmp" (OuterVolumeSpecName: "tmp") pod "3a27846a-1db8-4b41-aff3-3c8a79170220" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.877793 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-metrics" (OuterVolumeSpecName: "metrics") pod "3a27846a-1db8-4b41-aff3-3c8a79170220" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.879910 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "3a27846a-1db8-4b41-aff3-3c8a79170220" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.880123 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-sa-token" (OuterVolumeSpecName: "sa-token") pod "3a27846a-1db8-4b41-aff3-3c8a79170220" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.883945 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-token" (OuterVolumeSpecName: "collector-token") pod "3a27846a-1db8-4b41-aff3-3c8a79170220" (UID: "3a27846a-1db8-4b41-aff3-3c8a79170220"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.975050 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.975095 4805 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.975112 4805 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a27846a-1db8-4b41-aff3-3c8a79170220-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.975125 4805 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.975142 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn2bp\" (UniqueName: \"kubernetes.io/projected/3a27846a-1db8-4b41-aff3-3c8a79170220-kube-api-access-kn2bp\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.975153 4805 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: 
\"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.975164 4805 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-collector-token\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.975177 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.975186 4805 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3a27846a-1db8-4b41-aff3-3c8a79170220-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:48 crc kubenswrapper[4805]: I0216 21:08:48.975196 4805 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3a27846a-1db8-4b41-aff3-3c8a79170220-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.741816 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-zc5gw" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.810418 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-zc5gw"] Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.822264 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-zc5gw"] Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.830511 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-5cssm"] Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.831932 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-5cssm" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.835700 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-2pbdp" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.836335 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.836469 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.837915 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.838025 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.843916 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-5cssm"] Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.879372 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.994512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-collector-token\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.994692 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-entrypoint\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " 
pod="openshift-logging/collector-5cssm" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.994793 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz66b\" (UniqueName: \"kubernetes.io/projected/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-kube-api-access-kz66b\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.994868 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-sa-token\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.995051 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-datadir\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.995116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-tmp\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.995201 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-config-openshift-service-cacrt\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 
21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.995317 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-trusted-ca\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.995363 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-config\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.995394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-collector-syslog-receiver\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:49 crc kubenswrapper[4805]: I0216 21:08:49.995530 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-metrics\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.097604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-entrypoint\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.097671 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kz66b\" (UniqueName: \"kubernetes.io/projected/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-kube-api-access-kz66b\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.097750 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-sa-token\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.097809 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-datadir\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.097842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-tmp\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.097877 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-config-openshift-service-cacrt\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.097928 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-trusted-ca\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.097962 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-config\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.097998 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-collector-syslog-receiver\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.098237 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-metrics\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.098313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-collector-token\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.100013 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-entrypoint\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " 
pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.100133 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-datadir\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.100845 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-trusted-ca\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.101293 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-config-openshift-service-cacrt\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.103254 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-config\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.104994 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-tmp\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.106242 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: 
\"kubernetes.io/secret/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-collector-token\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.106282 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-metrics\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.107559 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-collector-syslog-receiver\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.127526 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-sa-token\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.136536 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz66b\" (UniqueName: \"kubernetes.io/projected/37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a-kube-api-access-kz66b\") pod \"collector-5cssm\" (UID: \"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a\") " pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.202499 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-5cssm" Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.674669 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-5cssm"] Feb 16 21:08:50 crc kubenswrapper[4805]: I0216 21:08:50.753069 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-5cssm" event={"ID":"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a","Type":"ContainerStarted","Data":"97f453db7362a538553efbc522f160510183abac390770296724fe44793b4c2d"} Feb 16 21:08:51 crc kubenswrapper[4805]: I0216 21:08:51.616105 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a27846a-1db8-4b41-aff3-3c8a79170220" path="/var/lib/kubelet/pods/3a27846a-1db8-4b41-aff3-3c8a79170220/volumes" Feb 16 21:08:57 crc kubenswrapper[4805]: I0216 21:08:57.827118 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-5cssm" event={"ID":"37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a","Type":"ContainerStarted","Data":"da739f884a437a14754f17732851f5cc79d11368ce1db12ba8c1bf94b9fb7536"} Feb 16 21:08:57 crc kubenswrapper[4805]: I0216 21:08:57.876813 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-5cssm" podStartSLOduration=2.412761008 podStartE2EDuration="8.873323152s" podCreationTimestamp="2026-02-16 21:08:49 +0000 UTC" firstStartedPulling="2026-02-16 21:08:50.687260039 +0000 UTC m=+748.505943374" lastFinishedPulling="2026-02-16 21:08:57.147822223 +0000 UTC m=+754.966505518" observedRunningTime="2026-02-16 21:08:57.861231692 +0000 UTC m=+755.679915017" watchObservedRunningTime="2026-02-16 21:08:57.873323152 +0000 UTC m=+755.692006497" Feb 16 21:09:01 crc kubenswrapper[4805]: I0216 21:09:01.596878 4805 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 21:09:08 crc kubenswrapper[4805]: I0216 21:09:08.100299 4805 patch_prober.go:28] 
interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:09:08 crc kubenswrapper[4805]: I0216 21:09:08.100840 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:09:08 crc kubenswrapper[4805]: I0216 21:09:08.100894 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:09:08 crc kubenswrapper[4805]: I0216 21:09:08.101687 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ea6a527da3d45efcd7fbad2ab314c9a6cf5f646dedd04c29a2b897c9c0a84d1"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:09:08 crc kubenswrapper[4805]: I0216 21:09:08.101788 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://2ea6a527da3d45efcd7fbad2ab314c9a6cf5f646dedd04c29a2b897c9c0a84d1" gracePeriod=600 Feb 16 21:09:08 crc kubenswrapper[4805]: I0216 21:09:08.934830 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="2ea6a527da3d45efcd7fbad2ab314c9a6cf5f646dedd04c29a2b897c9c0a84d1" exitCode=0 Feb 16 21:09:08 crc kubenswrapper[4805]: I0216 
21:09:08.934880 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"2ea6a527da3d45efcd7fbad2ab314c9a6cf5f646dedd04c29a2b897c9c0a84d1"} Feb 16 21:09:08 crc kubenswrapper[4805]: I0216 21:09:08.935298 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"e746550f7cf0d50be9739ce7e97b17ef93c5c8ee315aa0d1535183b0c6cfe9db"} Feb 16 21:09:08 crc kubenswrapper[4805]: I0216 21:09:08.935322 4805 scope.go:117] "RemoveContainer" containerID="ceb0b80c1374cd4ccb9dd0d277e234416e87b41fce8125fe9f568455202c275d" Feb 16 21:09:28 crc kubenswrapper[4805]: I0216 21:09:28.921235 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr"] Feb 16 21:09:28 crc kubenswrapper[4805]: I0216 21:09:28.923860 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:28 crc kubenswrapper[4805]: I0216 21:09:28.927370 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:09:28 crc kubenswrapper[4805]: I0216 21:09:28.936826 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr"] Feb 16 21:09:29 crc kubenswrapper[4805]: I0216 21:09:29.025587 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:29 crc kubenswrapper[4805]: I0216 21:09:29.025652 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h887h\" (UniqueName: \"kubernetes.io/projected/8ad99527-987d-443d-b6c4-4abc6fd5fd72-kube-api-access-h887h\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:29 crc kubenswrapper[4805]: I0216 21:09:29.025691 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:29 crc kubenswrapper[4805]: 
I0216 21:09:29.127599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:29 crc kubenswrapper[4805]: I0216 21:09:29.127677 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h887h\" (UniqueName: \"kubernetes.io/projected/8ad99527-987d-443d-b6c4-4abc6fd5fd72-kube-api-access-h887h\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:29 crc kubenswrapper[4805]: I0216 21:09:29.127712 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:29 crc kubenswrapper[4805]: I0216 21:09:29.128143 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:29 crc kubenswrapper[4805]: I0216 21:09:29.128250 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:29 crc kubenswrapper[4805]: I0216 21:09:29.159436 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h887h\" (UniqueName: \"kubernetes.io/projected/8ad99527-987d-443d-b6c4-4abc6fd5fd72-kube-api-access-h887h\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:29 crc kubenswrapper[4805]: I0216 21:09:29.289797 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:29 crc kubenswrapper[4805]: I0216 21:09:29.560215 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr"] Feb 16 21:09:30 crc kubenswrapper[4805]: I0216 21:09:30.144268 4805 generic.go:334] "Generic (PLEG): container finished" podID="8ad99527-987d-443d-b6c4-4abc6fd5fd72" containerID="198926035befc39200bddfdb378134a7128c18f4104884175941e68e0ef44847" exitCode=0 Feb 16 21:09:30 crc kubenswrapper[4805]: I0216 21:09:30.144321 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" event={"ID":"8ad99527-987d-443d-b6c4-4abc6fd5fd72","Type":"ContainerDied","Data":"198926035befc39200bddfdb378134a7128c18f4104884175941e68e0ef44847"} Feb 16 21:09:30 crc kubenswrapper[4805]: I0216 21:09:30.144637 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" event={"ID":"8ad99527-987d-443d-b6c4-4abc6fd5fd72","Type":"ContainerStarted","Data":"84a240c5ccef2279622b905113bb88b82cf23e6422c3276818ddc3bcb6e0c420"} Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.273816 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jnd75"] Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.275702 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.284143 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jnd75"] Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.362953 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm625\" (UniqueName: \"kubernetes.io/projected/b2729e04-0a2e-4121-bbfe-50486750b2d4-kube-api-access-lm625\") pod \"redhat-operators-jnd75\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.363363 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-utilities\") pod \"redhat-operators-jnd75\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.363399 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-catalog-content\") pod \"redhat-operators-jnd75\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " 
pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.464577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm625\" (UniqueName: \"kubernetes.io/projected/b2729e04-0a2e-4121-bbfe-50486750b2d4-kube-api-access-lm625\") pod \"redhat-operators-jnd75\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.464628 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-utilities\") pod \"redhat-operators-jnd75\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.464658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-catalog-content\") pod \"redhat-operators-jnd75\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.465263 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-catalog-content\") pod \"redhat-operators-jnd75\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.465333 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-utilities\") pod \"redhat-operators-jnd75\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:31 crc 
kubenswrapper[4805]: I0216 21:09:31.484960 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm625\" (UniqueName: \"kubernetes.io/projected/b2729e04-0a2e-4121-bbfe-50486750b2d4-kube-api-access-lm625\") pod \"redhat-operators-jnd75\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:31 crc kubenswrapper[4805]: I0216 21:09:31.652667 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:32 crc kubenswrapper[4805]: I0216 21:09:32.095568 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jnd75"] Feb 16 21:09:32 crc kubenswrapper[4805]: I0216 21:09:32.161571 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnd75" event={"ID":"b2729e04-0a2e-4121-bbfe-50486750b2d4","Type":"ContainerStarted","Data":"e87db1109348a9eb70b566965f327075b3e50c30d93656082a7da2c9bf39cd96"} Feb 16 21:09:32 crc kubenswrapper[4805]: I0216 21:09:32.163086 4805 generic.go:334] "Generic (PLEG): container finished" podID="8ad99527-987d-443d-b6c4-4abc6fd5fd72" containerID="7839327d6516c124623cd7e72910e923a24b5d1dab009b2b28b6089aef9f275e" exitCode=0 Feb 16 21:09:32 crc kubenswrapper[4805]: I0216 21:09:32.163147 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" event={"ID":"8ad99527-987d-443d-b6c4-4abc6fd5fd72","Type":"ContainerDied","Data":"7839327d6516c124623cd7e72910e923a24b5d1dab009b2b28b6089aef9f275e"} Feb 16 21:09:33 crc kubenswrapper[4805]: I0216 21:09:33.172087 4805 generic.go:334] "Generic (PLEG): container finished" podID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerID="1e280e01d531f367d38de0835fddffc9b665736c7f8eb9aeb7cc8e65a5ab8515" exitCode=0 Feb 16 21:09:33 crc kubenswrapper[4805]: I0216 21:09:33.172194 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnd75" event={"ID":"b2729e04-0a2e-4121-bbfe-50486750b2d4","Type":"ContainerDied","Data":"1e280e01d531f367d38de0835fddffc9b665736c7f8eb9aeb7cc8e65a5ab8515"} Feb 16 21:09:33 crc kubenswrapper[4805]: I0216 21:09:33.175883 4805 generic.go:334] "Generic (PLEG): container finished" podID="8ad99527-987d-443d-b6c4-4abc6fd5fd72" containerID="ee0e2c89031ead7619885602fa9274ab546e829406b6c8a69683b1cb81b61e72" exitCode=0 Feb 16 21:09:33 crc kubenswrapper[4805]: I0216 21:09:33.175918 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" event={"ID":"8ad99527-987d-443d-b6c4-4abc6fd5fd72","Type":"ContainerDied","Data":"ee0e2c89031ead7619885602fa9274ab546e829406b6c8a69683b1cb81b61e72"} Feb 16 21:09:34 crc kubenswrapper[4805]: I0216 21:09:34.186336 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnd75" event={"ID":"b2729e04-0a2e-4121-bbfe-50486750b2d4","Type":"ContainerStarted","Data":"553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419"} Feb 16 21:09:34 crc kubenswrapper[4805]: I0216 21:09:34.477955 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:34 crc kubenswrapper[4805]: I0216 21:09:34.514668 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-bundle\") pod \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " Feb 16 21:09:34 crc kubenswrapper[4805]: I0216 21:09:34.514738 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-util\") pod \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " Feb 16 21:09:34 crc kubenswrapper[4805]: I0216 21:09:34.514804 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h887h\" (UniqueName: \"kubernetes.io/projected/8ad99527-987d-443d-b6c4-4abc6fd5fd72-kube-api-access-h887h\") pod \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\" (UID: \"8ad99527-987d-443d-b6c4-4abc6fd5fd72\") " Feb 16 21:09:34 crc kubenswrapper[4805]: I0216 21:09:34.515644 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-bundle" (OuterVolumeSpecName: "bundle") pod "8ad99527-987d-443d-b6c4-4abc6fd5fd72" (UID: "8ad99527-987d-443d-b6c4-4abc6fd5fd72"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:09:34 crc kubenswrapper[4805]: I0216 21:09:34.528878 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad99527-987d-443d-b6c4-4abc6fd5fd72-kube-api-access-h887h" (OuterVolumeSpecName: "kube-api-access-h887h") pod "8ad99527-987d-443d-b6c4-4abc6fd5fd72" (UID: "8ad99527-987d-443d-b6c4-4abc6fd5fd72"). InnerVolumeSpecName "kube-api-access-h887h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:09:34 crc kubenswrapper[4805]: I0216 21:09:34.541898 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-util" (OuterVolumeSpecName: "util") pod "8ad99527-987d-443d-b6c4-4abc6fd5fd72" (UID: "8ad99527-987d-443d-b6c4-4abc6fd5fd72"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:09:34 crc kubenswrapper[4805]: I0216 21:09:34.616656 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:09:34 crc kubenswrapper[4805]: I0216 21:09:34.616708 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ad99527-987d-443d-b6c4-4abc6fd5fd72-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:09:34 crc kubenswrapper[4805]: I0216 21:09:34.616746 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h887h\" (UniqueName: \"kubernetes.io/projected/8ad99527-987d-443d-b6c4-4abc6fd5fd72-kube-api-access-h887h\") on node \"crc\" DevicePath \"\"" Feb 16 21:09:35 crc kubenswrapper[4805]: I0216 21:09:35.196973 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" event={"ID":"8ad99527-987d-443d-b6c4-4abc6fd5fd72","Type":"ContainerDied","Data":"84a240c5ccef2279622b905113bb88b82cf23e6422c3276818ddc3bcb6e0c420"} Feb 16 21:09:35 crc kubenswrapper[4805]: I0216 21:09:35.197014 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a240c5ccef2279622b905113bb88b82cf23e6422c3276818ddc3bcb6e0c420" Feb 16 21:09:35 crc kubenswrapper[4805]: I0216 21:09:35.197025 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr" Feb 16 21:09:35 crc kubenswrapper[4805]: I0216 21:09:35.199747 4805 generic.go:334] "Generic (PLEG): container finished" podID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerID="553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419" exitCode=0 Feb 16 21:09:35 crc kubenswrapper[4805]: I0216 21:09:35.199790 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnd75" event={"ID":"b2729e04-0a2e-4121-bbfe-50486750b2d4","Type":"ContainerDied","Data":"553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419"} Feb 16 21:09:36 crc kubenswrapper[4805]: I0216 21:09:36.209239 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnd75" event={"ID":"b2729e04-0a2e-4121-bbfe-50486750b2d4","Type":"ContainerStarted","Data":"d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb"} Feb 16 21:09:36 crc kubenswrapper[4805]: I0216 21:09:36.258268 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jnd75" podStartSLOduration=2.837915299 podStartE2EDuration="5.258234616s" podCreationTimestamp="2026-02-16 21:09:31 +0000 UTC" firstStartedPulling="2026-02-16 21:09:33.174254934 +0000 UTC m=+790.992938239" lastFinishedPulling="2026-02-16 21:09:35.594574261 +0000 UTC m=+793.413257556" observedRunningTime="2026-02-16 21:09:36.252260919 +0000 UTC m=+794.070944234" watchObservedRunningTime="2026-02-16 21:09:36.258234616 +0000 UTC m=+794.076917951" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.693213 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-js2b9"] Feb 16 21:09:38 crc kubenswrapper[4805]: E0216 21:09:38.694059 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad99527-987d-443d-b6c4-4abc6fd5fd72" 
containerName="pull" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.694089 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad99527-987d-443d-b6c4-4abc6fd5fd72" containerName="pull" Feb 16 21:09:38 crc kubenswrapper[4805]: E0216 21:09:38.694120 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad99527-987d-443d-b6c4-4abc6fd5fd72" containerName="extract" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.694137 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad99527-987d-443d-b6c4-4abc6fd5fd72" containerName="extract" Feb 16 21:09:38 crc kubenswrapper[4805]: E0216 21:09:38.694191 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad99527-987d-443d-b6c4-4abc6fd5fd72" containerName="util" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.694211 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad99527-987d-443d-b6c4-4abc6fd5fd72" containerName="util" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.694473 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad99527-987d-443d-b6c4-4abc6fd5fd72" containerName="extract" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.695303 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-js2b9" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.697554 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-75mrc" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.697627 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.697890 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.700917 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-js2b9"] Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.786485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z2s8\" (UniqueName: \"kubernetes.io/projected/79b41a3f-fa2e-4d38-872b-17744c7ef23e-kube-api-access-7z2s8\") pod \"nmstate-operator-694c9596b7-js2b9\" (UID: \"79b41a3f-fa2e-4d38-872b-17744c7ef23e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-js2b9" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.887944 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z2s8\" (UniqueName: \"kubernetes.io/projected/79b41a3f-fa2e-4d38-872b-17744c7ef23e-kube-api-access-7z2s8\") pod \"nmstate-operator-694c9596b7-js2b9\" (UID: \"79b41a3f-fa2e-4d38-872b-17744c7ef23e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-js2b9" Feb 16 21:09:38 crc kubenswrapper[4805]: I0216 21:09:38.914559 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z2s8\" (UniqueName: \"kubernetes.io/projected/79b41a3f-fa2e-4d38-872b-17744c7ef23e-kube-api-access-7z2s8\") pod \"nmstate-operator-694c9596b7-js2b9\" (UID: 
\"79b41a3f-fa2e-4d38-872b-17744c7ef23e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-js2b9" Feb 16 21:09:39 crc kubenswrapper[4805]: I0216 21:09:39.015040 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-js2b9" Feb 16 21:09:39 crc kubenswrapper[4805]: I0216 21:09:39.468514 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-js2b9"] Feb 16 21:09:40 crc kubenswrapper[4805]: I0216 21:09:40.239990 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-js2b9" event={"ID":"79b41a3f-fa2e-4d38-872b-17744c7ef23e","Type":"ContainerStarted","Data":"e6b4ec7f2e2f3be8e8c741a90c46b5a51d09da80dd0a1512e3df4a49c9845805"} Feb 16 21:09:41 crc kubenswrapper[4805]: I0216 21:09:41.653604 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:41 crc kubenswrapper[4805]: I0216 21:09:41.653676 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:42 crc kubenswrapper[4805]: I0216 21:09:42.713246 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jnd75" podUID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerName="registry-server" probeResult="failure" output=< Feb 16 21:09:42 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:09:42 crc kubenswrapper[4805]: > Feb 16 21:09:44 crc kubenswrapper[4805]: I0216 21:09:44.273425 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-js2b9" event={"ID":"79b41a3f-fa2e-4d38-872b-17744c7ef23e","Type":"ContainerStarted","Data":"043a9b49ed93eedea3799a8e03b23dea2207fb266b3ffbf2bce0b09b3b598a69"} Feb 16 21:09:44 crc kubenswrapper[4805]: I0216 21:09:44.299242 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-js2b9" podStartSLOduration=2.538775314 podStartE2EDuration="6.299223906s" podCreationTimestamp="2026-02-16 21:09:38 +0000 UTC" firstStartedPulling="2026-02-16 21:09:39.47636453 +0000 UTC m=+797.295047835" lastFinishedPulling="2026-02-16 21:09:43.236813132 +0000 UTC m=+801.055496427" observedRunningTime="2026-02-16 21:09:44.295905869 +0000 UTC m=+802.114589164" watchObservedRunningTime="2026-02-16 21:09:44.299223906 +0000 UTC m=+802.117907201" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.277617 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj"] Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.278805 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.281943 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mksr6" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.289771 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x"] Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.290955 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.331773 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.339810 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj"] Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.345478 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-x5dj7"] Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.347269 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.352735 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x"] Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.393527 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/91f381c7-9be0-48df-8f9d-1a708710e670-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nsw6x\" (UID: \"91f381c7-9be0-48df-8f9d-1a708710e670\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.393573 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqddc\" (UniqueName: \"kubernetes.io/projected/f676a01e-1cc1-482c-933b-5312fed324e2-kube-api-access-kqddc\") pod \"nmstate-metrics-58c85c668d-mbrjj\" (UID: \"f676a01e-1cc1-482c-933b-5312fed324e2\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.393594 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" 
(UniqueName: \"kubernetes.io/host-path/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-dbus-socket\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.393618 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbmpw\" (UniqueName: \"kubernetes.io/projected/91f381c7-9be0-48df-8f9d-1a708710e670-kube-api-access-gbmpw\") pod \"nmstate-webhook-866bcb46dc-nsw6x\" (UID: \"91f381c7-9be0-48df-8f9d-1a708710e670\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.394083 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fml6x\" (UniqueName: \"kubernetes.io/projected/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-kube-api-access-fml6x\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.394143 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-ovs-socket\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.394215 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-nmstate-lock\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.431014 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth"] Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.432101 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.435971 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.436005 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.436044 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-b9rfb" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.448039 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth"] Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.496039 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4f9115b9-3cc9-44f1-bf72-3141429a5001-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-grmth\" (UID: \"4f9115b9-3cc9-44f1-bf72-3141429a5001\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.496085 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbmpw\" (UniqueName: \"kubernetes.io/projected/91f381c7-9be0-48df-8f9d-1a708710e670-kube-api-access-gbmpw\") pod \"nmstate-webhook-866bcb46dc-nsw6x\" (UID: \"91f381c7-9be0-48df-8f9d-1a708710e670\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.496113 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9115b9-3cc9-44f1-bf72-3141429a5001-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-grmth\" (UID: \"4f9115b9-3cc9-44f1-bf72-3141429a5001\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.496454 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdhl4\" (UniqueName: \"kubernetes.io/projected/4f9115b9-3cc9-44f1-bf72-3141429a5001-kube-api-access-wdhl4\") pod \"nmstate-console-plugin-5c78fc5d65-grmth\" (UID: \"4f9115b9-3cc9-44f1-bf72-3141429a5001\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.496522 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fml6x\" (UniqueName: \"kubernetes.io/projected/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-kube-api-access-fml6x\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.496573 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-ovs-socket\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.496611 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-nmstate-lock\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.496681 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/91f381c7-9be0-48df-8f9d-1a708710e670-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nsw6x\" (UID: \"91f381c7-9be0-48df-8f9d-1a708710e670\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.496751 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqddc\" (UniqueName: \"kubernetes.io/projected/f676a01e-1cc1-482c-933b-5312fed324e2-kube-api-access-kqddc\") pod \"nmstate-metrics-58c85c668d-mbrjj\" (UID: \"f676a01e-1cc1-482c-933b-5312fed324e2\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.496758 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-ovs-socket\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.496782 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-dbus-socket\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.497058 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-dbus-socket\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.497375 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" 
(UniqueName: \"kubernetes.io/host-path/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-nmstate-lock\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.504634 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/91f381c7-9be0-48df-8f9d-1a708710e670-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nsw6x\" (UID: \"91f381c7-9be0-48df-8f9d-1a708710e670\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.512103 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fml6x\" (UniqueName: \"kubernetes.io/projected/339cbe11-a64b-4a7f-b5fc-4f2136c6dfac-kube-api-access-fml6x\") pod \"nmstate-handler-x5dj7\" (UID: \"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac\") " pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.516661 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbmpw\" (UniqueName: \"kubernetes.io/projected/91f381c7-9be0-48df-8f9d-1a708710e670-kube-api-access-gbmpw\") pod \"nmstate-webhook-866bcb46dc-nsw6x\" (UID: \"91f381c7-9be0-48df-8f9d-1a708710e670\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.519352 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqddc\" (UniqueName: \"kubernetes.io/projected/f676a01e-1cc1-482c-933b-5312fed324e2-kube-api-access-kqddc\") pod \"nmstate-metrics-58c85c668d-mbrjj\" (UID: \"f676a01e-1cc1-482c-933b-5312fed324e2\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.598637 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdhl4\" 
(UniqueName: \"kubernetes.io/projected/4f9115b9-3cc9-44f1-bf72-3141429a5001-kube-api-access-wdhl4\") pod \"nmstate-console-plugin-5c78fc5d65-grmth\" (UID: \"4f9115b9-3cc9-44f1-bf72-3141429a5001\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.599029 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4f9115b9-3cc9-44f1-bf72-3141429a5001-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-grmth\" (UID: \"4f9115b9-3cc9-44f1-bf72-3141429a5001\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.599113 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9115b9-3cc9-44f1-bf72-3141429a5001-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-grmth\" (UID: \"4f9115b9-3cc9-44f1-bf72-3141429a5001\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.599810 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4f9115b9-3cc9-44f1-bf72-3141429a5001-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-grmth\" (UID: \"4f9115b9-3cc9-44f1-bf72-3141429a5001\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.603344 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9115b9-3cc9-44f1-bf72-3141429a5001-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-grmth\" (UID: \"4f9115b9-3cc9-44f1-bf72-3141429a5001\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.614917 4805 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7cb64748c-ggr6n"] Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.615931 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.621689 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdhl4\" (UniqueName: \"kubernetes.io/projected/4f9115b9-3cc9-44f1-bf72-3141429a5001-kube-api-access-wdhl4\") pod \"nmstate-console-plugin-5c78fc5d65-grmth\" (UID: \"4f9115b9-3cc9-44f1-bf72-3141429a5001\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.630686 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cb64748c-ggr6n"] Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.640379 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.651013 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.659540 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.702092 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-trusted-ca-bundle\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.702228 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-oauth-serving-cert\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.702307 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-oauth-config\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.702522 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cstgl\" (UniqueName: \"kubernetes.io/projected/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-kube-api-access-cstgl\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.703945 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-config\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.704032 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-service-ca\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.704085 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-serving-cert\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.749134 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.807851 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-config\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.807910 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-service-ca\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.807943 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-serving-cert\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.807992 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-trusted-ca-bundle\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.808016 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-oauth-serving-cert\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " 
pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.808067 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-oauth-config\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.808114 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cstgl\" (UniqueName: \"kubernetes.io/projected/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-kube-api-access-cstgl\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.808807 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-service-ca\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.809351 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-oauth-serving-cert\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.810067 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-trusted-ca-bundle\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 
21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.810629 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-config\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.816235 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-serving-cert\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.826465 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-oauth-config\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.830415 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cstgl\" (UniqueName: \"kubernetes.io/projected/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-kube-api-access-cstgl\") pod \"console-7cb64748c-ggr6n\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:45 crc kubenswrapper[4805]: I0216 21:09:45.989732 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:46 crc kubenswrapper[4805]: I0216 21:09:46.110648 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj"] Feb 16 21:09:46 crc kubenswrapper[4805]: I0216 21:09:46.212435 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x"] Feb 16 21:09:46 crc kubenswrapper[4805]: W0216 21:09:46.226287 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91f381c7_9be0_48df_8f9d_1a708710e670.slice/crio-f65e0a4abec5336f45e170fc3f5448696408b7697761e12f332927757a61df2b WatchSource:0}: Error finding container f65e0a4abec5336f45e170fc3f5448696408b7697761e12f332927757a61df2b: Status 404 returned error can't find the container with id f65e0a4abec5336f45e170fc3f5448696408b7697761e12f332927757a61df2b Feb 16 21:09:46 crc kubenswrapper[4805]: I0216 21:09:46.235714 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth"] Feb 16 21:09:46 crc kubenswrapper[4805]: I0216 21:09:46.286348 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-x5dj7" event={"ID":"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac","Type":"ContainerStarted","Data":"9800d8a3e7408d3f426783b48e18cd683c39aecd6fce57a453fc4785ffb0a13f"} Feb 16 21:09:46 crc kubenswrapper[4805]: I0216 21:09:46.290186 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj" event={"ID":"f676a01e-1cc1-482c-933b-5312fed324e2","Type":"ContainerStarted","Data":"51cc8cd89a60d9138a8f3dc06e6ed84f8f024c656d2c703f9207bc937ff9896f"} Feb 16 21:09:46 crc kubenswrapper[4805]: I0216 21:09:46.291811 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" 
event={"ID":"4f9115b9-3cc9-44f1-bf72-3141429a5001","Type":"ContainerStarted","Data":"3dc5b333eb032a915f7984807c8e17b9f48cc75c121bc20d2d3709d10d10f2e7"} Feb 16 21:09:46 crc kubenswrapper[4805]: I0216 21:09:46.293356 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" event={"ID":"91f381c7-9be0-48df-8f9d-1a708710e670","Type":"ContainerStarted","Data":"f65e0a4abec5336f45e170fc3f5448696408b7697761e12f332927757a61df2b"} Feb 16 21:09:46 crc kubenswrapper[4805]: I0216 21:09:46.489765 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cb64748c-ggr6n"] Feb 16 21:09:46 crc kubenswrapper[4805]: W0216 21:09:46.491545 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ba8a3e9_b193_4ec2_983d_e5ce4efd302f.slice/crio-8791cf5b98e122f0b32c99941ec157eca6c2101af35c57577a8f53a3c29db66e WatchSource:0}: Error finding container 8791cf5b98e122f0b32c99941ec157eca6c2101af35c57577a8f53a3c29db66e: Status 404 returned error can't find the container with id 8791cf5b98e122f0b32c99941ec157eca6c2101af35c57577a8f53a3c29db66e Feb 16 21:09:47 crc kubenswrapper[4805]: I0216 21:09:47.308345 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cb64748c-ggr6n" event={"ID":"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f","Type":"ContainerStarted","Data":"cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66"} Feb 16 21:09:47 crc kubenswrapper[4805]: I0216 21:09:47.310617 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cb64748c-ggr6n" event={"ID":"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f","Type":"ContainerStarted","Data":"8791cf5b98e122f0b32c99941ec157eca6c2101af35c57577a8f53a3c29db66e"} Feb 16 21:09:47 crc kubenswrapper[4805]: I0216 21:09:47.332048 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-7cb64748c-ggr6n" podStartSLOduration=2.33202886 podStartE2EDuration="2.33202886s" podCreationTimestamp="2026-02-16 21:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:09:47.330680025 +0000 UTC m=+805.149363370" watchObservedRunningTime="2026-02-16 21:09:47.33202886 +0000 UTC m=+805.150712165" Feb 16 21:09:49 crc kubenswrapper[4805]: I0216 21:09:49.329922 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" event={"ID":"4f9115b9-3cc9-44f1-bf72-3141429a5001","Type":"ContainerStarted","Data":"201d1ce8e79e5bbd86506b4739e4e908f5ffaa1b5790f9d46a6979588c0a01ae"} Feb 16 21:09:49 crc kubenswrapper[4805]: I0216 21:09:49.360631 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-grmth" podStartSLOduration=1.884695732 podStartE2EDuration="4.360600363s" podCreationTimestamp="2026-02-16 21:09:45 +0000 UTC" firstStartedPulling="2026-02-16 21:09:46.253713578 +0000 UTC m=+804.072396873" lastFinishedPulling="2026-02-16 21:09:48.729618199 +0000 UTC m=+806.548301504" observedRunningTime="2026-02-16 21:09:49.349622634 +0000 UTC m=+807.168305969" watchObservedRunningTime="2026-02-16 21:09:49.360600363 +0000 UTC m=+807.179283688" Feb 16 21:09:51 crc kubenswrapper[4805]: I0216 21:09:51.697631 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:51 crc kubenswrapper[4805]: I0216 21:09:51.757075 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:51 crc kubenswrapper[4805]: I0216 21:09:51.942375 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jnd75"] Feb 16 21:09:52 crc kubenswrapper[4805]: I0216 
21:09:52.376179 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" event={"ID":"91f381c7-9be0-48df-8f9d-1a708710e670","Type":"ContainerStarted","Data":"87f14b257fe2caf4785874c1ca9034bfa29b7ccd95d12d6b02f8cf26439a15a0"} Feb 16 21:09:52 crc kubenswrapper[4805]: I0216 21:09:52.377783 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-x5dj7" event={"ID":"339cbe11-a64b-4a7f-b5fc-4f2136c6dfac","Type":"ContainerStarted","Data":"0984a04b3a261dbec1db7bc5871303a7acd9c7fbf9ef929386449e283c1fb3a9"} Feb 16 21:09:52 crc kubenswrapper[4805]: I0216 21:09:52.378495 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:09:52 crc kubenswrapper[4805]: I0216 21:09:52.380050 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj" event={"ID":"f676a01e-1cc1-482c-933b-5312fed324e2","Type":"ContainerStarted","Data":"5dbf55205fd51748ba0ab376282ef8ae2f70ff840c54bdee9cc048bf7e366b69"} Feb 16 21:09:52 crc kubenswrapper[4805]: I0216 21:09:52.403189 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" podStartSLOduration=2.463049931 podStartE2EDuration="7.403155524s" podCreationTimestamp="2026-02-16 21:09:45 +0000 UTC" firstStartedPulling="2026-02-16 21:09:46.2351538 +0000 UTC m=+804.053837095" lastFinishedPulling="2026-02-16 21:09:51.175259393 +0000 UTC m=+808.993942688" observedRunningTime="2026-02-16 21:09:52.393538081 +0000 UTC m=+810.212221406" watchObservedRunningTime="2026-02-16 21:09:52.403155524 +0000 UTC m=+810.221838889" Feb 16 21:09:52 crc kubenswrapper[4805]: I0216 21:09:52.451712 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-x5dj7" podStartSLOduration=1.980333242 podStartE2EDuration="7.451694263s" 
podCreationTimestamp="2026-02-16 21:09:45 +0000 UTC" firstStartedPulling="2026-02-16 21:09:45.718182938 +0000 UTC m=+803.536866233" lastFinishedPulling="2026-02-16 21:09:51.189543919 +0000 UTC m=+809.008227254" observedRunningTime="2026-02-16 21:09:52.448402415 +0000 UTC m=+810.267085730" watchObservedRunningTime="2026-02-16 21:09:52.451694263 +0000 UTC m=+810.270377558" Feb 16 21:09:53 crc kubenswrapper[4805]: I0216 21:09:53.386136 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jnd75" podUID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerName="registry-server" containerID="cri-o://d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb" gracePeriod=2 Feb 16 21:09:53 crc kubenswrapper[4805]: I0216 21:09:53.386616 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" Feb 16 21:09:53 crc kubenswrapper[4805]: E0216 21:09:53.462515 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2729e04_0a2e_4121_bbfe_50486750b2d4.slice/crio-d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.075467 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.188242 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-catalog-content\") pod \"b2729e04-0a2e-4121-bbfe-50486750b2d4\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.188502 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm625\" (UniqueName: \"kubernetes.io/projected/b2729e04-0a2e-4121-bbfe-50486750b2d4-kube-api-access-lm625\") pod \"b2729e04-0a2e-4121-bbfe-50486750b2d4\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.188615 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-utilities\") pod \"b2729e04-0a2e-4121-bbfe-50486750b2d4\" (UID: \"b2729e04-0a2e-4121-bbfe-50486750b2d4\") " Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.189781 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-utilities" (OuterVolumeSpecName: "utilities") pod "b2729e04-0a2e-4121-bbfe-50486750b2d4" (UID: "b2729e04-0a2e-4121-bbfe-50486750b2d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.196929 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2729e04-0a2e-4121-bbfe-50486750b2d4-kube-api-access-lm625" (OuterVolumeSpecName: "kube-api-access-lm625") pod "b2729e04-0a2e-4121-bbfe-50486750b2d4" (UID: "b2729e04-0a2e-4121-bbfe-50486750b2d4"). InnerVolumeSpecName "kube-api-access-lm625". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.290865 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm625\" (UniqueName: \"kubernetes.io/projected/b2729e04-0a2e-4121-bbfe-50486750b2d4-kube-api-access-lm625\") on node \"crc\" DevicePath \"\"" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.290919 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.300226 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2729e04-0a2e-4121-bbfe-50486750b2d4" (UID: "b2729e04-0a2e-4121-bbfe-50486750b2d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.391796 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2729e04-0a2e-4121-bbfe-50486750b2d4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.396962 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj" event={"ID":"f676a01e-1cc1-482c-933b-5312fed324e2","Type":"ContainerStarted","Data":"8c2f311165843a198d28a93099d7f7e20576de6aedc2bcc7f686c8575bd7d610"} Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.399683 4805 generic.go:334] "Generic (PLEG): container finished" podID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerID="d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb" exitCode=0 Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.399798 4805 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jnd75" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.399791 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnd75" event={"ID":"b2729e04-0a2e-4121-bbfe-50486750b2d4","Type":"ContainerDied","Data":"d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb"} Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.399859 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnd75" event={"ID":"b2729e04-0a2e-4121-bbfe-50486750b2d4","Type":"ContainerDied","Data":"e87db1109348a9eb70b566965f327075b3e50c30d93656082a7da2c9bf39cd96"} Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.399891 4805 scope.go:117] "RemoveContainer" containerID="d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.417308 4805 scope.go:117] "RemoveContainer" containerID="553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.420373 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mbrjj" podStartSLOduration=1.7344398970000001 podStartE2EDuration="9.420343218s" podCreationTimestamp="2026-02-16 21:09:45 +0000 UTC" firstStartedPulling="2026-02-16 21:09:46.152934805 +0000 UTC m=+803.971618100" lastFinishedPulling="2026-02-16 21:09:53.838838086 +0000 UTC m=+811.657521421" observedRunningTime="2026-02-16 21:09:54.417140572 +0000 UTC m=+812.235823877" watchObservedRunningTime="2026-02-16 21:09:54.420343218 +0000 UTC m=+812.239026553" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.453674 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jnd75"] Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.469596 4805 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-operators-jnd75"] Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.470569 4805 scope.go:117] "RemoveContainer" containerID="1e280e01d531f367d38de0835fddffc9b665736c7f8eb9aeb7cc8e65a5ab8515" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.485674 4805 scope.go:117] "RemoveContainer" containerID="d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb" Feb 16 21:09:54 crc kubenswrapper[4805]: E0216 21:09:54.487094 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb\": container with ID starting with d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb not found: ID does not exist" containerID="d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.487144 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb"} err="failed to get container status \"d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb\": rpc error: code = NotFound desc = could not find container \"d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb\": container with ID starting with d5d158ccf511d5d217477e03748e68279ddd3510d1046e2b9c27b0bee32d71fb not found: ID does not exist" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.487172 4805 scope.go:117] "RemoveContainer" containerID="553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419" Feb 16 21:09:54 crc kubenswrapper[4805]: E0216 21:09:54.488023 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419\": container with ID starting with 
553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419 not found: ID does not exist" containerID="553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.488044 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419"} err="failed to get container status \"553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419\": rpc error: code = NotFound desc = could not find container \"553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419\": container with ID starting with 553d53f4ddc71efd3e6cc33ca8b78386fbe0c4354756d0871a414e04e29e1419 not found: ID does not exist" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.488057 4805 scope.go:117] "RemoveContainer" containerID="1e280e01d531f367d38de0835fddffc9b665736c7f8eb9aeb7cc8e65a5ab8515" Feb 16 21:09:54 crc kubenswrapper[4805]: E0216 21:09:54.488249 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e280e01d531f367d38de0835fddffc9b665736c7f8eb9aeb7cc8e65a5ab8515\": container with ID starting with 1e280e01d531f367d38de0835fddffc9b665736c7f8eb9aeb7cc8e65a5ab8515 not found: ID does not exist" containerID="1e280e01d531f367d38de0835fddffc9b665736c7f8eb9aeb7cc8e65a5ab8515" Feb 16 21:09:54 crc kubenswrapper[4805]: I0216 21:09:54.488271 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e280e01d531f367d38de0835fddffc9b665736c7f8eb9aeb7cc8e65a5ab8515"} err="failed to get container status \"1e280e01d531f367d38de0835fddffc9b665736c7f8eb9aeb7cc8e65a5ab8515\": rpc error: code = NotFound desc = could not find container \"1e280e01d531f367d38de0835fddffc9b665736c7f8eb9aeb7cc8e65a5ab8515\": container with ID starting with 1e280e01d531f367d38de0835fddffc9b665736c7f8eb9aeb7cc8e65a5ab8515 not found: ID does not 
exist" Feb 16 21:09:55 crc kubenswrapper[4805]: I0216 21:09:55.610942 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2729e04-0a2e-4121-bbfe-50486750b2d4" path="/var/lib/kubelet/pods/b2729e04-0a2e-4121-bbfe-50486750b2d4/volumes" Feb 16 21:09:55 crc kubenswrapper[4805]: I0216 21:09:55.990696 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:55 crc kubenswrapper[4805]: I0216 21:09:55.990795 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:55 crc kubenswrapper[4805]: I0216 21:09:55.999531 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:56 crc kubenswrapper[4805]: I0216 21:09:56.428938 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:09:56 crc kubenswrapper[4805]: I0216 21:09:56.497702 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-79f7c68f86-9bv6w"] Feb 16 21:10:00 crc kubenswrapper[4805]: I0216 21:10:00.697258 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-x5dj7" Feb 16 21:10:05 crc kubenswrapper[4805]: I0216 21:10:05.662600 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nsw6x" Feb 16 21:10:21 crc kubenswrapper[4805]: I0216 21:10:21.546595 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-79f7c68f86-9bv6w" podUID="f054a68c-ddb3-440e-87c2-dc1444078331" containerName="console" containerID="cri-o://6db40e61404d6a64b3964d7809ccc9ccf4d25aa188ffae9a32a88853ce0b9f1d" gracePeriod=15 Feb 16 21:10:21 crc kubenswrapper[4805]: I0216 21:10:21.687468 4805 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-console_console-79f7c68f86-9bv6w_f054a68c-ddb3-440e-87c2-dc1444078331/console/0.log" Feb 16 21:10:21 crc kubenswrapper[4805]: I0216 21:10:21.687711 4805 generic.go:334] "Generic (PLEG): container finished" podID="f054a68c-ddb3-440e-87c2-dc1444078331" containerID="6db40e61404d6a64b3964d7809ccc9ccf4d25aa188ffae9a32a88853ce0b9f1d" exitCode=2 Feb 16 21:10:21 crc kubenswrapper[4805]: I0216 21:10:21.687753 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79f7c68f86-9bv6w" event={"ID":"f054a68c-ddb3-440e-87c2-dc1444078331","Type":"ContainerDied","Data":"6db40e61404d6a64b3964d7809ccc9ccf4d25aa188ffae9a32a88853ce0b9f1d"} Feb 16 21:10:21 crc kubenswrapper[4805]: I0216 21:10:21.935150 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79f7c68f86-9bv6w_f054a68c-ddb3-440e-87c2-dc1444078331/console/0.log" Feb 16 21:10:21 crc kubenswrapper[4805]: I0216 21:10:21.935224 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.029348 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-serving-cert\") pod \"f054a68c-ddb3-440e-87c2-dc1444078331\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.029436 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk9zt\" (UniqueName: \"kubernetes.io/projected/f054a68c-ddb3-440e-87c2-dc1444078331-kube-api-access-dk9zt\") pod \"f054a68c-ddb3-440e-87c2-dc1444078331\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.029452 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-oauth-config\") pod \"f054a68c-ddb3-440e-87c2-dc1444078331\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.029468 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-trusted-ca-bundle\") pod \"f054a68c-ddb3-440e-87c2-dc1444078331\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.029492 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-console-config\") pod \"f054a68c-ddb3-440e-87c2-dc1444078331\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.029554 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-oauth-serving-cert\") pod \"f054a68c-ddb3-440e-87c2-dc1444078331\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.029575 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-service-ca\") pod \"f054a68c-ddb3-440e-87c2-dc1444078331\" (UID: \"f054a68c-ddb3-440e-87c2-dc1444078331\") " Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.030508 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-service-ca" (OuterVolumeSpecName: "service-ca") pod "f054a68c-ddb3-440e-87c2-dc1444078331" (UID: "f054a68c-ddb3-440e-87c2-dc1444078331"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.030585 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f054a68c-ddb3-440e-87c2-dc1444078331" (UID: "f054a68c-ddb3-440e-87c2-dc1444078331"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.030659 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f054a68c-ddb3-440e-87c2-dc1444078331" (UID: "f054a68c-ddb3-440e-87c2-dc1444078331"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.030797 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-console-config" (OuterVolumeSpecName: "console-config") pod "f054a68c-ddb3-440e-87c2-dc1444078331" (UID: "f054a68c-ddb3-440e-87c2-dc1444078331"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.038851 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f054a68c-ddb3-440e-87c2-dc1444078331" (UID: "f054a68c-ddb3-440e-87c2-dc1444078331"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.038875 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f054a68c-ddb3-440e-87c2-dc1444078331" (UID: "f054a68c-ddb3-440e-87c2-dc1444078331"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.038874 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f054a68c-ddb3-440e-87c2-dc1444078331-kube-api-access-dk9zt" (OuterVolumeSpecName: "kube-api-access-dk9zt") pod "f054a68c-ddb3-440e-87c2-dc1444078331" (UID: "f054a68c-ddb3-440e-87c2-dc1444078331"). InnerVolumeSpecName "kube-api-access-dk9zt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.131264 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.131294 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk9zt\" (UniqueName: \"kubernetes.io/projected/f054a68c-ddb3-440e-87c2-dc1444078331-kube-api-access-dk9zt\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.131306 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f054a68c-ddb3-440e-87c2-dc1444078331-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.131314 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.131323 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.131331 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.131340 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f054a68c-ddb3-440e-87c2-dc1444078331-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:22 crc 
kubenswrapper[4805]: I0216 21:10:22.696586 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79f7c68f86-9bv6w_f054a68c-ddb3-440e-87c2-dc1444078331/console/0.log" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.697031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79f7c68f86-9bv6w" event={"ID":"f054a68c-ddb3-440e-87c2-dc1444078331","Type":"ContainerDied","Data":"e26b5445db7af0bda1f51a15f6a1ee03128b1fb7a49cd5605179b8c9ae9d8105"} Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.697071 4805 scope.go:117] "RemoveContainer" containerID="6db40e61404d6a64b3964d7809ccc9ccf4d25aa188ffae9a32a88853ce0b9f1d" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.697181 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79f7c68f86-9bv6w" Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.737136 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-79f7c68f86-9bv6w"] Feb 16 21:10:22 crc kubenswrapper[4805]: I0216 21:10:22.744804 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-79f7c68f86-9bv6w"] Feb 16 21:10:23 crc kubenswrapper[4805]: I0216 21:10:23.614660 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f054a68c-ddb3-440e-87c2-dc1444078331" path="/var/lib/kubelet/pods/f054a68c-ddb3-440e-87c2-dc1444078331/volumes" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.134847 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42"] Feb 16 21:10:25 crc kubenswrapper[4805]: E0216 21:10:25.135504 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f054a68c-ddb3-440e-87c2-dc1444078331" containerName="console" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.135520 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f054a68c-ddb3-440e-87c2-dc1444078331" containerName="console" Feb 16 21:10:25 crc kubenswrapper[4805]: E0216 21:10:25.135549 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerName="registry-server" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.135558 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerName="registry-server" Feb 16 21:10:25 crc kubenswrapper[4805]: E0216 21:10:25.135572 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerName="extract-utilities" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.135581 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerName="extract-utilities" Feb 16 21:10:25 crc kubenswrapper[4805]: E0216 21:10:25.135598 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerName="extract-content" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.135609 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerName="extract-content" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.135792 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2729e04-0a2e-4121-bbfe-50486750b2d4" containerName="registry-server" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.135819 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f054a68c-ddb3-440e-87c2-dc1444078331" containerName="console" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.137135 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.143178 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.145556 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42"] Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.184990 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cqv9\" (UniqueName: \"kubernetes.io/projected/75a7675d-39b3-49c2-8ffb-bcec428f29b3-kube-api-access-2cqv9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.185099 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.185154 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:25 crc kubenswrapper[4805]: 
I0216 21:10:25.287248 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.287503 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cqv9\" (UniqueName: \"kubernetes.io/projected/75a7675d-39b3-49c2-8ffb-bcec428f29b3-kube-api-access-2cqv9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.287583 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.287965 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.288322 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.325130 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cqv9\" (UniqueName: \"kubernetes.io/projected/75a7675d-39b3-49c2-8ffb-bcec428f29b3-kube-api-access-2cqv9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.469911 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:25 crc kubenswrapper[4805]: I0216 21:10:25.779573 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42"] Feb 16 21:10:25 crc kubenswrapper[4805]: W0216 21:10:25.787649 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75a7675d_39b3_49c2_8ffb_bcec428f29b3.slice/crio-260e119c8bb52de50ee4c140b175b5ba6e2bcf82ea38424ebdce08099296e74a WatchSource:0}: Error finding container 260e119c8bb52de50ee4c140b175b5ba6e2bcf82ea38424ebdce08099296e74a: Status 404 returned error can't find the container with id 260e119c8bb52de50ee4c140b175b5ba6e2bcf82ea38424ebdce08099296e74a Feb 16 21:10:26 crc kubenswrapper[4805]: I0216 21:10:26.742794 4805 generic.go:334] "Generic (PLEG): container finished" podID="75a7675d-39b3-49c2-8ffb-bcec428f29b3" containerID="2a0fdcd4770a7df2fc04fe02c657eb98311ca3cce9d6f35c1e02808d90fa6b25" 
exitCode=0 Feb 16 21:10:26 crc kubenswrapper[4805]: I0216 21:10:26.742887 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" event={"ID":"75a7675d-39b3-49c2-8ffb-bcec428f29b3","Type":"ContainerDied","Data":"2a0fdcd4770a7df2fc04fe02c657eb98311ca3cce9d6f35c1e02808d90fa6b25"} Feb 16 21:10:26 crc kubenswrapper[4805]: I0216 21:10:26.743081 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" event={"ID":"75a7675d-39b3-49c2-8ffb-bcec428f29b3","Type":"ContainerStarted","Data":"260e119c8bb52de50ee4c140b175b5ba6e2bcf82ea38424ebdce08099296e74a"} Feb 16 21:10:26 crc kubenswrapper[4805]: I0216 21:10:26.745843 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:10:28 crc kubenswrapper[4805]: I0216 21:10:28.759018 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" event={"ID":"75a7675d-39b3-49c2-8ffb-bcec428f29b3","Type":"ContainerStarted","Data":"8c0221ae126b3430c8b53c57312aaab235cd6be9f32ba771963cf5fe2799e3be"} Feb 16 21:10:29 crc kubenswrapper[4805]: I0216 21:10:29.768704 4805 generic.go:334] "Generic (PLEG): container finished" podID="75a7675d-39b3-49c2-8ffb-bcec428f29b3" containerID="8c0221ae126b3430c8b53c57312aaab235cd6be9f32ba771963cf5fe2799e3be" exitCode=0 Feb 16 21:10:29 crc kubenswrapper[4805]: I0216 21:10:29.768806 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" event={"ID":"75a7675d-39b3-49c2-8ffb-bcec428f29b3","Type":"ContainerDied","Data":"8c0221ae126b3430c8b53c57312aaab235cd6be9f32ba771963cf5fe2799e3be"} Feb 16 21:10:30 crc kubenswrapper[4805]: I0216 21:10:30.781225 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="75a7675d-39b3-49c2-8ffb-bcec428f29b3" containerID="a59b08626951023abcbf4c36e8e26b86b4091af9c504f8b906d1da71676c7827" exitCode=0 Feb 16 21:10:30 crc kubenswrapper[4805]: I0216 21:10:30.781290 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" event={"ID":"75a7675d-39b3-49c2-8ffb-bcec428f29b3","Type":"ContainerDied","Data":"a59b08626951023abcbf4c36e8e26b86b4091af9c504f8b906d1da71676c7827"} Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.133237 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.326318 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-bundle\") pod \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.326447 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cqv9\" (UniqueName: \"kubernetes.io/projected/75a7675d-39b3-49c2-8ffb-bcec428f29b3-kube-api-access-2cqv9\") pod \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.326487 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-util\") pod \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\" (UID: \"75a7675d-39b3-49c2-8ffb-bcec428f29b3\") " Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.327271 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-bundle" 
(OuterVolumeSpecName: "bundle") pod "75a7675d-39b3-49c2-8ffb-bcec428f29b3" (UID: "75a7675d-39b3-49c2-8ffb-bcec428f29b3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.343155 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-util" (OuterVolumeSpecName: "util") pod "75a7675d-39b3-49c2-8ffb-bcec428f29b3" (UID: "75a7675d-39b3-49c2-8ffb-bcec428f29b3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.343579 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a7675d-39b3-49c2-8ffb-bcec428f29b3-kube-api-access-2cqv9" (OuterVolumeSpecName: "kube-api-access-2cqv9") pod "75a7675d-39b3-49c2-8ffb-bcec428f29b3" (UID: "75a7675d-39b3-49c2-8ffb-bcec428f29b3"). InnerVolumeSpecName "kube-api-access-2cqv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.427889 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.427925 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cqv9\" (UniqueName: \"kubernetes.io/projected/75a7675d-39b3-49c2-8ffb-bcec428f29b3-kube-api-access-2cqv9\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.427936 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75a7675d-39b3-49c2-8ffb-bcec428f29b3-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.803609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" event={"ID":"75a7675d-39b3-49c2-8ffb-bcec428f29b3","Type":"ContainerDied","Data":"260e119c8bb52de50ee4c140b175b5ba6e2bcf82ea38424ebdce08099296e74a"} Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.804056 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="260e119c8bb52de50ee4c140b175b5ba6e2bcf82ea38424ebdce08099296e74a" Feb 16 21:10:32 crc kubenswrapper[4805]: I0216 21:10:32.803655 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.947825 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7765589444-2hjkq"] Feb 16 21:10:43 crc kubenswrapper[4805]: E0216 21:10:43.948618 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a7675d-39b3-49c2-8ffb-bcec428f29b3" containerName="extract" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.948631 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a7675d-39b3-49c2-8ffb-bcec428f29b3" containerName="extract" Feb 16 21:10:43 crc kubenswrapper[4805]: E0216 21:10:43.948652 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a7675d-39b3-49c2-8ffb-bcec428f29b3" containerName="pull" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.948657 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a7675d-39b3-49c2-8ffb-bcec428f29b3" containerName="pull" Feb 16 21:10:43 crc kubenswrapper[4805]: E0216 21:10:43.948673 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a7675d-39b3-49c2-8ffb-bcec428f29b3" containerName="util" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.948691 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a7675d-39b3-49c2-8ffb-bcec428f29b3" containerName="util" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.948847 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a7675d-39b3-49c2-8ffb-bcec428f29b3" containerName="extract" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.949354 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.950918 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.951372 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.953904 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-kw8zl" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.953916 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.954165 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 21:10:43 crc kubenswrapper[4805]: I0216 21:10:43.962775 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7765589444-2hjkq"] Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.018460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e255f1c2-9a99-44b8-830f-56015433f783-apiservice-cert\") pod \"metallb-operator-controller-manager-7765589444-2hjkq\" (UID: \"e255f1c2-9a99-44b8-830f-56015433f783\") " pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.018852 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kddv9\" (UniqueName: \"kubernetes.io/projected/e255f1c2-9a99-44b8-830f-56015433f783-kube-api-access-kddv9\") pod 
\"metallb-operator-controller-manager-7765589444-2hjkq\" (UID: \"e255f1c2-9a99-44b8-830f-56015433f783\") " pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.018898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e255f1c2-9a99-44b8-830f-56015433f783-webhook-cert\") pod \"metallb-operator-controller-manager-7765589444-2hjkq\" (UID: \"e255f1c2-9a99-44b8-830f-56015433f783\") " pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.122773 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e255f1c2-9a99-44b8-830f-56015433f783-apiservice-cert\") pod \"metallb-operator-controller-manager-7765589444-2hjkq\" (UID: \"e255f1c2-9a99-44b8-830f-56015433f783\") " pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.127918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kddv9\" (UniqueName: \"kubernetes.io/projected/e255f1c2-9a99-44b8-830f-56015433f783-kube-api-access-kddv9\") pod \"metallb-operator-controller-manager-7765589444-2hjkq\" (UID: \"e255f1c2-9a99-44b8-830f-56015433f783\") " pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.128156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e255f1c2-9a99-44b8-830f-56015433f783-webhook-cert\") pod \"metallb-operator-controller-manager-7765589444-2hjkq\" (UID: \"e255f1c2-9a99-44b8-830f-56015433f783\") " pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:44 crc 
kubenswrapper[4805]: I0216 21:10:44.132992 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e255f1c2-9a99-44b8-830f-56015433f783-webhook-cert\") pod \"metallb-operator-controller-manager-7765589444-2hjkq\" (UID: \"e255f1c2-9a99-44b8-830f-56015433f783\") " pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.141650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e255f1c2-9a99-44b8-830f-56015433f783-apiservice-cert\") pod \"metallb-operator-controller-manager-7765589444-2hjkq\" (UID: \"e255f1c2-9a99-44b8-830f-56015433f783\") " pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.145792 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kddv9\" (UniqueName: \"kubernetes.io/projected/e255f1c2-9a99-44b8-830f-56015433f783-kube-api-access-kddv9\") pod \"metallb-operator-controller-manager-7765589444-2hjkq\" (UID: \"e255f1c2-9a99-44b8-830f-56015433f783\") " pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.200894 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt"] Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.201964 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.207173 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-2hx7p" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.207355 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.207508 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.217187 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt"] Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.230299 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c56fcb42-00d2-410a-9aec-183240413d1c-webhook-cert\") pod \"metallb-operator-webhook-server-7978c795d6-h8bpt\" (UID: \"c56fcb42-00d2-410a-9aec-183240413d1c\") " pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.230366 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c56fcb42-00d2-410a-9aec-183240413d1c-apiservice-cert\") pod \"metallb-operator-webhook-server-7978c795d6-h8bpt\" (UID: \"c56fcb42-00d2-410a-9aec-183240413d1c\") " pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.230422 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xw9b\" (UniqueName: 
\"kubernetes.io/projected/c56fcb42-00d2-410a-9aec-183240413d1c-kube-api-access-8xw9b\") pod \"metallb-operator-webhook-server-7978c795d6-h8bpt\" (UID: \"c56fcb42-00d2-410a-9aec-183240413d1c\") " pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.318478 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.332177 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c56fcb42-00d2-410a-9aec-183240413d1c-apiservice-cert\") pod \"metallb-operator-webhook-server-7978c795d6-h8bpt\" (UID: \"c56fcb42-00d2-410a-9aec-183240413d1c\") " pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.332258 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xw9b\" (UniqueName: \"kubernetes.io/projected/c56fcb42-00d2-410a-9aec-183240413d1c-kube-api-access-8xw9b\") pod \"metallb-operator-webhook-server-7978c795d6-h8bpt\" (UID: \"c56fcb42-00d2-410a-9aec-183240413d1c\") " pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.332308 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c56fcb42-00d2-410a-9aec-183240413d1c-webhook-cert\") pod \"metallb-operator-webhook-server-7978c795d6-h8bpt\" (UID: \"c56fcb42-00d2-410a-9aec-183240413d1c\") " pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.342246 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/c56fcb42-00d2-410a-9aec-183240413d1c-apiservice-cert\") pod \"metallb-operator-webhook-server-7978c795d6-h8bpt\" (UID: \"c56fcb42-00d2-410a-9aec-183240413d1c\") " pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.345393 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c56fcb42-00d2-410a-9aec-183240413d1c-webhook-cert\") pod \"metallb-operator-webhook-server-7978c795d6-h8bpt\" (UID: \"c56fcb42-00d2-410a-9aec-183240413d1c\") " pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.361310 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xw9b\" (UniqueName: \"kubernetes.io/projected/c56fcb42-00d2-410a-9aec-183240413d1c-kube-api-access-8xw9b\") pod \"metallb-operator-webhook-server-7978c795d6-h8bpt\" (UID: \"c56fcb42-00d2-410a-9aec-183240413d1c\") " pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.523585 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.784301 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7765589444-2hjkq"] Feb 16 21:10:44 crc kubenswrapper[4805]: W0216 21:10:44.801884 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode255f1c2_9a99_44b8_830f_56015433f783.slice/crio-17efd214d0dcbf30ef8dcde954287af3808915545b78dce405c525a8465b64e2 WatchSource:0}: Error finding container 17efd214d0dcbf30ef8dcde954287af3808915545b78dce405c525a8465b64e2: Status 404 returned error can't find the container with id 17efd214d0dcbf30ef8dcde954287af3808915545b78dce405c525a8465b64e2 Feb 16 21:10:44 crc kubenswrapper[4805]: I0216 21:10:44.891610 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" event={"ID":"e255f1c2-9a99-44b8-830f-56015433f783","Type":"ContainerStarted","Data":"17efd214d0dcbf30ef8dcde954287af3808915545b78dce405c525a8465b64e2"} Feb 16 21:10:45 crc kubenswrapper[4805]: I0216 21:10:45.067826 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt"] Feb 16 21:10:45 crc kubenswrapper[4805]: I0216 21:10:45.900349 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" event={"ID":"c56fcb42-00d2-410a-9aec-183240413d1c","Type":"ContainerStarted","Data":"57d0f01d18d60c60fa6d4f545c9122736a9a6c5e006b8503c6824b7f9506733f"} Feb 16 21:10:48 crc kubenswrapper[4805]: I0216 21:10:48.928036 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" 
event={"ID":"e255f1c2-9a99-44b8-830f-56015433f783","Type":"ContainerStarted","Data":"d54f60ca82542986cae1911e79fe436cd2868ff985ee13c6d22815f9aa1af03c"} Feb 16 21:10:48 crc kubenswrapper[4805]: I0216 21:10:48.928602 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:10:48 crc kubenswrapper[4805]: I0216 21:10:48.957507 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" podStartSLOduration=2.454360637 podStartE2EDuration="5.957491788s" podCreationTimestamp="2026-02-16 21:10:43 +0000 UTC" firstStartedPulling="2026-02-16 21:10:44.803743737 +0000 UTC m=+862.622427042" lastFinishedPulling="2026-02-16 21:10:48.306874898 +0000 UTC m=+866.125558193" observedRunningTime="2026-02-16 21:10:48.95497891 +0000 UTC m=+866.773662225" watchObservedRunningTime="2026-02-16 21:10:48.957491788 +0000 UTC m=+866.776175083" Feb 16 21:10:51 crc kubenswrapper[4805]: I0216 21:10:51.950961 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" event={"ID":"c56fcb42-00d2-410a-9aec-183240413d1c","Type":"ContainerStarted","Data":"29061f9d64463be0cbd98a53405fe4811ee85f1a6ba5a3695945b9e141cbf694"} Feb 16 21:10:51 crc kubenswrapper[4805]: I0216 21:10:51.951121 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:10:51 crc kubenswrapper[4805]: I0216 21:10:51.971850 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" podStartSLOduration=1.888367548 podStartE2EDuration="7.97182825s" podCreationTimestamp="2026-02-16 21:10:44 +0000 UTC" firstStartedPulling="2026-02-16 21:10:45.094607862 +0000 UTC m=+862.913291157" lastFinishedPulling="2026-02-16 
21:10:51.178068574 +0000 UTC m=+868.996751859" observedRunningTime="2026-02-16 21:10:51.967463792 +0000 UTC m=+869.786147097" watchObservedRunningTime="2026-02-16 21:10:51.97182825 +0000 UTC m=+869.790511545" Feb 16 21:11:04 crc kubenswrapper[4805]: I0216 21:11:04.527981 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7978c795d6-h8bpt" Feb 16 21:11:08 crc kubenswrapper[4805]: I0216 21:11:08.099813 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:11:08 crc kubenswrapper[4805]: I0216 21:11:08.099898 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:11:24 crc kubenswrapper[4805]: I0216 21:11:24.321605 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7765589444-2hjkq" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.097669 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-87c7n"] Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.108766 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.110783 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.111322 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-b6vb4" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.124357 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb"] Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.125281 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.132314 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.132330 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.141657 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb"] Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.206977 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9ttm2"] Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.208141 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.210020 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzqx6\" (UniqueName: \"kubernetes.io/projected/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-kube-api-access-tzqx6\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.210076 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-metrics-certs\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.210111 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-memberlist\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.210191 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-metallb-excludel2\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.215114 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.215293 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qmkbw" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 
21:11:25.215471 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.215560 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.218922 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-bbdrb"] Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.223854 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.225530 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.236634 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-bbdrb"] Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.311478 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-frr-conf\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.311517 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nxl\" (UniqueName: \"kubernetes.io/projected/df2d01e8-01e1-48db-96d7-1ef79c926d5a-kube-api-access-m5nxl\") pod \"controller-69bbfbf88f-bbdrb\" (UID: \"df2d01e8-01e1-48db-96d7-1ef79c926d5a\") " pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.311546 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-metrics-certs\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.311592 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-frr-startup\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.311624 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-reloader\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.311653 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzqx6\" (UniqueName: \"kubernetes.io/projected/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-kube-api-access-tzqx6\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.311674 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b9f819d-da9a-4b13-b0fb-70e11f25fb3f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7spzb\" (UID: \"0b9f819d-da9a-4b13-b0fb-70e11f25fb3f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.311702 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-metrics-certs\") pod \"speaker-9ttm2\" 
(UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.311749 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-memberlist\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.311781 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlsg5\" (UniqueName: \"kubernetes.io/projected/0b9f819d-da9a-4b13-b0fb-70e11f25fb3f-kube-api-access-mlsg5\") pod \"frr-k8s-webhook-server-78b44bf5bb-7spzb\" (UID: \"0b9f819d-da9a-4b13-b0fb-70e11f25fb3f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" Feb 16 21:11:25 crc kubenswrapper[4805]: E0216 21:11:25.311834 4805 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 16 21:11:25 crc kubenswrapper[4805]: E0216 21:11:25.311885 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-metrics-certs podName:a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289 nodeName:}" failed. No retries permitted until 2026-02-16 21:11:25.811870525 +0000 UTC m=+903.630553820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-metrics-certs") pod "speaker-9ttm2" (UID: "a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289") : secret "speaker-certs-secret" not found Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.311823 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-metallb-excludel2\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: E0216 21:11:25.311931 4805 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 21:11:25 crc kubenswrapper[4805]: E0216 21:11:25.312002 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-memberlist podName:a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289 nodeName:}" failed. No retries permitted until 2026-02-16 21:11:25.811985248 +0000 UTC m=+903.630668543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-memberlist") pod "speaker-9ttm2" (UID: "a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289") : secret "metallb-memberlist" not found Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.312027 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-metrics\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.312075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df2d01e8-01e1-48db-96d7-1ef79c926d5a-cert\") pod \"controller-69bbfbf88f-bbdrb\" (UID: \"df2d01e8-01e1-48db-96d7-1ef79c926d5a\") " pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.312130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df2d01e8-01e1-48db-96d7-1ef79c926d5a-metrics-certs\") pod \"controller-69bbfbf88f-bbdrb\" (UID: \"df2d01e8-01e1-48db-96d7-1ef79c926d5a\") " pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.312157 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-frr-sockets\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.312176 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47wcb\" (UniqueName: 
\"kubernetes.io/projected/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-kube-api-access-47wcb\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.312493 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-metallb-excludel2\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.331775 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzqx6\" (UniqueName: \"kubernetes.io/projected/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-kube-api-access-tzqx6\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413242 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlsg5\" (UniqueName: \"kubernetes.io/projected/0b9f819d-da9a-4b13-b0fb-70e11f25fb3f-kube-api-access-mlsg5\") pod \"frr-k8s-webhook-server-78b44bf5bb-7spzb\" (UID: \"0b9f819d-da9a-4b13-b0fb-70e11f25fb3f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413311 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-metrics\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413331 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df2d01e8-01e1-48db-96d7-1ef79c926d5a-cert\") pod \"controller-69bbfbf88f-bbdrb\" (UID: 
\"df2d01e8-01e1-48db-96d7-1ef79c926d5a\") " pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413359 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df2d01e8-01e1-48db-96d7-1ef79c926d5a-metrics-certs\") pod \"controller-69bbfbf88f-bbdrb\" (UID: \"df2d01e8-01e1-48db-96d7-1ef79c926d5a\") " pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413378 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-frr-sockets\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413392 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47wcb\" (UniqueName: \"kubernetes.io/projected/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-kube-api-access-47wcb\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413411 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-frr-conf\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413426 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nxl\" (UniqueName: \"kubernetes.io/projected/df2d01e8-01e1-48db-96d7-1ef79c926d5a-kube-api-access-m5nxl\") pod \"controller-69bbfbf88f-bbdrb\" (UID: \"df2d01e8-01e1-48db-96d7-1ef79c926d5a\") " pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:25 crc 
kubenswrapper[4805]: I0216 21:11:25.413445 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-metrics-certs\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413467 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-frr-startup\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413487 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-reloader\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413514 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b9f819d-da9a-4b13-b0fb-70e11f25fb3f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7spzb\" (UID: \"0b9f819d-da9a-4b13-b0fb-70e11f25fb3f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.413963 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-metrics\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.414275 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-frr-sockets\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.414282 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-frr-conf\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.414524 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-reloader\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.414941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-frr-startup\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.416777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b9f819d-da9a-4b13-b0fb-70e11f25fb3f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7spzb\" (UID: \"0b9f819d-da9a-4b13-b0fb-70e11f25fb3f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.417105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-metrics-certs\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: 
I0216 21:11:25.417263 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df2d01e8-01e1-48db-96d7-1ef79c926d5a-metrics-certs\") pod \"controller-69bbfbf88f-bbdrb\" (UID: \"df2d01e8-01e1-48db-96d7-1ef79c926d5a\") " pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.420452 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.428536 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df2d01e8-01e1-48db-96d7-1ef79c926d5a-cert\") pod \"controller-69bbfbf88f-bbdrb\" (UID: \"df2d01e8-01e1-48db-96d7-1ef79c926d5a\") " pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.439390 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlsg5\" (UniqueName: \"kubernetes.io/projected/0b9f819d-da9a-4b13-b0fb-70e11f25fb3f-kube-api-access-mlsg5\") pod \"frr-k8s-webhook-server-78b44bf5bb-7spzb\" (UID: \"0b9f819d-da9a-4b13-b0fb-70e11f25fb3f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.448286 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.455170 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47wcb\" (UniqueName: \"kubernetes.io/projected/b729a8ff-87a7-4ed1-9af8-d2da4849e89c-kube-api-access-47wcb\") pod \"frr-k8s-87c7n\" (UID: \"b729a8ff-87a7-4ed1-9af8-d2da4849e89c\") " pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.460822 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nxl\" (UniqueName: \"kubernetes.io/projected/df2d01e8-01e1-48db-96d7-1ef79c926d5a-kube-api-access-m5nxl\") pod \"controller-69bbfbf88f-bbdrb\" (UID: \"df2d01e8-01e1-48db-96d7-1ef79c926d5a\") " pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.541488 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.733418 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.825830 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-metrics-certs\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.826148 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-memberlist\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: E0216 21:11:25.826283 4805 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 21:11:25 crc kubenswrapper[4805]: E0216 21:11:25.826377 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-memberlist podName:a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289 nodeName:}" failed. No retries permitted until 2026-02-16 21:11:26.826359188 +0000 UTC m=+904.645042483 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-memberlist") pod "speaker-9ttm2" (UID: "a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289") : secret "metallb-memberlist" not found Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.833630 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-metrics-certs\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.855325 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb"] Feb 16 21:11:25 crc kubenswrapper[4805]: W0216 21:11:25.973495 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf2d01e8_01e1_48db_96d7_1ef79c926d5a.slice/crio-2667ceb2ba532f76130b5e46b2d37988d74f663a1c1ec8a4c9a1c4c5370fdb3d WatchSource:0}: Error finding container 2667ceb2ba532f76130b5e46b2d37988d74f663a1c1ec8a4c9a1c4c5370fdb3d: Status 404 returned error can't find the container with id 2667ceb2ba532f76130b5e46b2d37988d74f663a1c1ec8a4c9a1c4c5370fdb3d Feb 16 21:11:25 crc kubenswrapper[4805]: I0216 21:11:25.975302 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-bbdrb"] Feb 16 21:11:26 crc kubenswrapper[4805]: I0216 21:11:26.245006 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-bbdrb" event={"ID":"df2d01e8-01e1-48db-96d7-1ef79c926d5a","Type":"ContainerStarted","Data":"d61738c837f5c9462574eda662c63e6c171adb6c31cdbaffe027b04c82d3c9f6"} Feb 16 21:11:26 crc kubenswrapper[4805]: I0216 21:11:26.245048 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-bbdrb" 
event={"ID":"df2d01e8-01e1-48db-96d7-1ef79c926d5a","Type":"ContainerStarted","Data":"b50643f8b6ccb41b3fd1534b6efa1960eda66f5183890d342d6d75144e274835"} Feb 16 21:11:26 crc kubenswrapper[4805]: I0216 21:11:26.245059 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-bbdrb" event={"ID":"df2d01e8-01e1-48db-96d7-1ef79c926d5a","Type":"ContainerStarted","Data":"2667ceb2ba532f76130b5e46b2d37988d74f663a1c1ec8a4c9a1c4c5370fdb3d"} Feb 16 21:11:26 crc kubenswrapper[4805]: I0216 21:11:26.245219 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:26 crc kubenswrapper[4805]: I0216 21:11:26.247031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87c7n" event={"ID":"b729a8ff-87a7-4ed1-9af8-d2da4849e89c","Type":"ContainerStarted","Data":"b1d7340688c5da0af677cdf3f5784ac6caa190b409179550e556c64d1cbb2411"} Feb 16 21:11:26 crc kubenswrapper[4805]: I0216 21:11:26.248073 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" event={"ID":"0b9f819d-da9a-4b13-b0fb-70e11f25fb3f","Type":"ContainerStarted","Data":"6c27a9292175fc4ddca08a97e784d2b77fdc63333473ae054b436bd22cca02f7"} Feb 16 21:11:26 crc kubenswrapper[4805]: I0216 21:11:26.278982 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-bbdrb" podStartSLOduration=1.278961293 podStartE2EDuration="1.278961293s" podCreationTimestamp="2026-02-16 21:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:11:26.260167735 +0000 UTC m=+904.078851030" watchObservedRunningTime="2026-02-16 21:11:26.278961293 +0000 UTC m=+904.097644598" Feb 16 21:11:26 crc kubenswrapper[4805]: I0216 21:11:26.845551 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-memberlist\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:26 crc kubenswrapper[4805]: I0216 21:11:26.852866 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289-memberlist\") pod \"speaker-9ttm2\" (UID: \"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289\") " pod="metallb-system/speaker-9ttm2" Feb 16 21:11:27 crc kubenswrapper[4805]: I0216 21:11:27.025710 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9ttm2" Feb 16 21:11:27 crc kubenswrapper[4805]: W0216 21:11:27.059019 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5b5e1ab_7b1e_4e78_9db3_a86ba41a1289.slice/crio-a01ef71cc5ac9a16521a27559971cee3d7fc3e2ad2744e5bc70474375a6432f5 WatchSource:0}: Error finding container a01ef71cc5ac9a16521a27559971cee3d7fc3e2ad2744e5bc70474375a6432f5: Status 404 returned error can't find the container with id a01ef71cc5ac9a16521a27559971cee3d7fc3e2ad2744e5bc70474375a6432f5 Feb 16 21:11:27 crc kubenswrapper[4805]: I0216 21:11:27.255126 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9ttm2" event={"ID":"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289","Type":"ContainerStarted","Data":"a01ef71cc5ac9a16521a27559971cee3d7fc3e2ad2744e5bc70474375a6432f5"} Feb 16 21:11:28 crc kubenswrapper[4805]: I0216 21:11:28.266384 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9ttm2" event={"ID":"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289","Type":"ContainerStarted","Data":"80ece4361d7d1a7edcc04c2f60feca6fbe2bcd71f5941c34a223fcd546cde60f"} Feb 16 21:11:28 crc kubenswrapper[4805]: I0216 21:11:28.266942 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/speaker-9ttm2" event={"ID":"a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289","Type":"ContainerStarted","Data":"c2b01262b8c608bd7968ed124ed15da88956d1baf23b8e160891707a6802c71a"} Feb 16 21:11:28 crc kubenswrapper[4805]: I0216 21:11:28.267596 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9ttm2" Feb 16 21:11:28 crc kubenswrapper[4805]: I0216 21:11:28.290340 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9ttm2" podStartSLOduration=3.290321471 podStartE2EDuration="3.290321471s" podCreationTimestamp="2026-02-16 21:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:11:28.289370695 +0000 UTC m=+906.108054010" watchObservedRunningTime="2026-02-16 21:11:28.290321471 +0000 UTC m=+906.109004766" Feb 16 21:11:34 crc kubenswrapper[4805]: I0216 21:11:34.322510 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" event={"ID":"0b9f819d-da9a-4b13-b0fb-70e11f25fb3f","Type":"ContainerStarted","Data":"2d68319a57c2f6031a0e7e048e34b4697491c84d512316790504670caf53c6a9"} Feb 16 21:11:34 crc kubenswrapper[4805]: I0216 21:11:34.323132 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" Feb 16 21:11:34 crc kubenswrapper[4805]: I0216 21:11:34.324423 4805 generic.go:334] "Generic (PLEG): container finished" podID="b729a8ff-87a7-4ed1-9af8-d2da4849e89c" containerID="0f98cb7ee78baddbe66747087551714959230ac6f0790f22f2aafb04264e927e" exitCode=0 Feb 16 21:11:34 crc kubenswrapper[4805]: I0216 21:11:34.324534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87c7n" event={"ID":"b729a8ff-87a7-4ed1-9af8-d2da4849e89c","Type":"ContainerDied","Data":"0f98cb7ee78baddbe66747087551714959230ac6f0790f22f2aafb04264e927e"} 
Feb 16 21:11:34 crc kubenswrapper[4805]: I0216 21:11:34.388545 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" podStartSLOduration=1.959842307 podStartE2EDuration="9.388525224s" podCreationTimestamp="2026-02-16 21:11:25 +0000 UTC" firstStartedPulling="2026-02-16 21:11:25.866368025 +0000 UTC m=+903.685051320" lastFinishedPulling="2026-02-16 21:11:33.295050932 +0000 UTC m=+911.113734237" observedRunningTime="2026-02-16 21:11:34.357174229 +0000 UTC m=+912.175857544" watchObservedRunningTime="2026-02-16 21:11:34.388525224 +0000 UTC m=+912.207208529" Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.332202 4805 generic.go:334] "Generic (PLEG): container finished" podID="b729a8ff-87a7-4ed1-9af8-d2da4849e89c" containerID="dabb656928594682c60cb5149b8823acd945b0bc8bdd4b08b7fb0ba01ac0aaae" exitCode=0 Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.332237 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87c7n" event={"ID":"b729a8ff-87a7-4ed1-9af8-d2da4849e89c","Type":"ContainerDied","Data":"dabb656928594682c60cb5149b8823acd945b0bc8bdd4b08b7fb0ba01ac0aaae"} Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.569828 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-td7v8"] Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.573657 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.581182 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-td7v8"] Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.725364 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-utilities\") pod \"redhat-marketplace-td7v8\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.725449 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-catalog-content\") pod \"redhat-marketplace-td7v8\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.725873 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2zw\" (UniqueName: \"kubernetes.io/projected/ca5f2389-32be-41b0-9f24-2954901f2cab-kube-api-access-ql2zw\") pod \"redhat-marketplace-td7v8\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.827701 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql2zw\" (UniqueName: \"kubernetes.io/projected/ca5f2389-32be-41b0-9f24-2954901f2cab-kube-api-access-ql2zw\") pod \"redhat-marketplace-td7v8\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.828176 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-utilities\") pod \"redhat-marketplace-td7v8\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.828222 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-catalog-content\") pod \"redhat-marketplace-td7v8\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.828593 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-utilities\") pod \"redhat-marketplace-td7v8\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.828661 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-catalog-content\") pod \"redhat-marketplace-td7v8\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.844901 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql2zw\" (UniqueName: \"kubernetes.io/projected/ca5f2389-32be-41b0-9f24-2954901f2cab-kube-api-access-ql2zw\") pod \"redhat-marketplace-td7v8\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:35 crc kubenswrapper[4805]: I0216 21:11:35.903291 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:36 crc kubenswrapper[4805]: I0216 21:11:36.343183 4805 generic.go:334] "Generic (PLEG): container finished" podID="b729a8ff-87a7-4ed1-9af8-d2da4849e89c" containerID="bc069fae6d6614ed08007390a13fcf1b0afcfd4948c1163aedabb3a99f6f02f5" exitCode=0 Feb 16 21:11:36 crc kubenswrapper[4805]: I0216 21:11:36.343236 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87c7n" event={"ID":"b729a8ff-87a7-4ed1-9af8-d2da4849e89c","Type":"ContainerDied","Data":"bc069fae6d6614ed08007390a13fcf1b0afcfd4948c1163aedabb3a99f6f02f5"} Feb 16 21:11:36 crc kubenswrapper[4805]: I0216 21:11:36.355461 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-td7v8"] Feb 16 21:11:37 crc kubenswrapper[4805]: I0216 21:11:37.031714 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9ttm2" Feb 16 21:11:37 crc kubenswrapper[4805]: I0216 21:11:37.357284 4805 generic.go:334] "Generic (PLEG): container finished" podID="ca5f2389-32be-41b0-9f24-2954901f2cab" containerID="f0c31e13b6b91fd59756c98706b015a88b91976517c235279b606867bf6caa95" exitCode=0 Feb 16 21:11:37 crc kubenswrapper[4805]: I0216 21:11:37.357406 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-td7v8" event={"ID":"ca5f2389-32be-41b0-9f24-2954901f2cab","Type":"ContainerDied","Data":"f0c31e13b6b91fd59756c98706b015a88b91976517c235279b606867bf6caa95"} Feb 16 21:11:37 crc kubenswrapper[4805]: I0216 21:11:37.357453 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-td7v8" event={"ID":"ca5f2389-32be-41b0-9f24-2954901f2cab","Type":"ContainerStarted","Data":"13dfba9d6866d5a69bbf3ae7ac6263c52329aeb7c45367d90faf9a7d4eb048ee"} Feb 16 21:11:37 crc kubenswrapper[4805]: I0216 21:11:37.374349 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-87c7n" event={"ID":"b729a8ff-87a7-4ed1-9af8-d2da4849e89c","Type":"ContainerStarted","Data":"3a16ba74ff02397739768d83c4a7775136e97bdf080ba233b7bbe22fae65a89b"} Feb 16 21:11:37 crc kubenswrapper[4805]: I0216 21:11:37.374499 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87c7n" event={"ID":"b729a8ff-87a7-4ed1-9af8-d2da4849e89c","Type":"ContainerStarted","Data":"eb4b52f9861f117c917b6204a5e1bf445e86838fb7893d05514a59154a821aa0"} Feb 16 21:11:37 crc kubenswrapper[4805]: I0216 21:11:37.374529 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87c7n" event={"ID":"b729a8ff-87a7-4ed1-9af8-d2da4849e89c","Type":"ContainerStarted","Data":"88883fb119a8ca7fcbe13c45a403e9e0c98a6345f04297e6a28f6e9c18f0348b"} Feb 16 21:11:37 crc kubenswrapper[4805]: I0216 21:11:37.374547 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87c7n" event={"ID":"b729a8ff-87a7-4ed1-9af8-d2da4849e89c","Type":"ContainerStarted","Data":"793711bb557f2620cb7530843dd99ba4e8c9f85fe301f04bcce93634a32785d3"} Feb 16 21:11:37 crc kubenswrapper[4805]: I0216 21:11:37.374563 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87c7n" event={"ID":"b729a8ff-87a7-4ed1-9af8-d2da4849e89c","Type":"ContainerStarted","Data":"9c08736ed56996ddfcc3ddb000299f893b3e188d47f7da40ad1c2adcd2067b6a"} Feb 16 21:11:38 crc kubenswrapper[4805]: I0216 21:11:38.100043 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:11:38 crc kubenswrapper[4805]: I0216 21:11:38.100359 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:11:38 crc kubenswrapper[4805]: I0216 21:11:38.386305 4805 generic.go:334] "Generic (PLEG): container finished" podID="ca5f2389-32be-41b0-9f24-2954901f2cab" containerID="fe1ec92b53cc37dfd42966faa0bd012ebde8ff370512c87ee17b015113eea9c2" exitCode=0 Feb 16 21:11:38 crc kubenswrapper[4805]: I0216 21:11:38.386372 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-td7v8" event={"ID":"ca5f2389-32be-41b0-9f24-2954901f2cab","Type":"ContainerDied","Data":"fe1ec92b53cc37dfd42966faa0bd012ebde8ff370512c87ee17b015113eea9c2"} Feb 16 21:11:38 crc kubenswrapper[4805]: I0216 21:11:38.396999 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-87c7n" event={"ID":"b729a8ff-87a7-4ed1-9af8-d2da4849e89c","Type":"ContainerStarted","Data":"b26921a993fe80834c46ea9fdd3bc59cf6248f9b23515f4a81880b14c5e93559"} Feb 16 21:11:38 crc kubenswrapper[4805]: I0216 21:11:38.397598 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:39 crc kubenswrapper[4805]: I0216 21:11:39.409331 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-td7v8" event={"ID":"ca5f2389-32be-41b0-9f24-2954901f2cab","Type":"ContainerStarted","Data":"daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34"} Feb 16 21:11:39 crc kubenswrapper[4805]: I0216 21:11:39.430300 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-87c7n" podStartSLOduration=6.982343478 podStartE2EDuration="14.430280917s" podCreationTimestamp="2026-02-16 21:11:25 +0000 UTC" firstStartedPulling="2026-02-16 21:11:25.874320684 +0000 UTC m=+903.693003979" lastFinishedPulling="2026-02-16 21:11:33.322258113 +0000 UTC m=+911.140941418" 
observedRunningTime="2026-02-16 21:11:38.438042099 +0000 UTC m=+916.256725434" watchObservedRunningTime="2026-02-16 21:11:39.430280917 +0000 UTC m=+917.248964212" Feb 16 21:11:39 crc kubenswrapper[4805]: I0216 21:11:39.432587 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-td7v8" podStartSLOduration=2.7799006630000003 podStartE2EDuration="4.432577931s" podCreationTimestamp="2026-02-16 21:11:35 +0000 UTC" firstStartedPulling="2026-02-16 21:11:37.36098081 +0000 UTC m=+915.179664145" lastFinishedPulling="2026-02-16 21:11:39.013658118 +0000 UTC m=+916.832341413" observedRunningTime="2026-02-16 21:11:39.426852513 +0000 UTC m=+917.245535808" watchObservedRunningTime="2026-02-16 21:11:39.432577931 +0000 UTC m=+917.251261226" Feb 16 21:11:40 crc kubenswrapper[4805]: I0216 21:11:40.734848 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:40 crc kubenswrapper[4805]: I0216 21:11:40.783104 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.149684 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mftrj"] Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.153900 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.170189 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mftrj"] Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.337525 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grbcq\" (UniqueName: \"kubernetes.io/projected/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-kube-api-access-grbcq\") pod \"community-operators-mftrj\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.338154 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-catalog-content\") pod \"community-operators-mftrj\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.338342 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-utilities\") pod \"community-operators-mftrj\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.440213 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grbcq\" (UniqueName: \"kubernetes.io/projected/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-kube-api-access-grbcq\") pod \"community-operators-mftrj\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.440295 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-catalog-content\") pod \"community-operators-mftrj\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.440353 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-utilities\") pod \"community-operators-mftrj\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.440929 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-utilities\") pod \"community-operators-mftrj\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.441504 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-catalog-content\") pod \"community-operators-mftrj\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.464920 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grbcq\" (UniqueName: \"kubernetes.io/projected/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-kube-api-access-grbcq\") pod \"community-operators-mftrj\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:41 crc kubenswrapper[4805]: I0216 21:11:41.480445 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:42 crc kubenswrapper[4805]: I0216 21:11:42.044170 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mftrj"] Feb 16 21:11:42 crc kubenswrapper[4805]: I0216 21:11:42.435643 4805 generic.go:334] "Generic (PLEG): container finished" podID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" containerID="dd8b68dc2e127740807c80197866f01f4e527b621d4b6ac4a83c2192fb2f5de7" exitCode=0 Feb 16 21:11:42 crc kubenswrapper[4805]: I0216 21:11:42.435716 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mftrj" event={"ID":"ca4cab4f-def6-4e0a-b666-7a8e30cc8705","Type":"ContainerDied","Data":"dd8b68dc2e127740807c80197866f01f4e527b621d4b6ac4a83c2192fb2f5de7"} Feb 16 21:11:42 crc kubenswrapper[4805]: I0216 21:11:42.435798 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mftrj" event={"ID":"ca4cab4f-def6-4e0a-b666-7a8e30cc8705","Type":"ContainerStarted","Data":"033fa8ce5fbb32972c7e1137da6f3f29dec71676f5085d6c0f6e1f940cb0a15a"} Feb 16 21:11:43 crc kubenswrapper[4805]: I0216 21:11:43.446210 4805 generic.go:334] "Generic (PLEG): container finished" podID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" containerID="a99b2df77ff7b23fd766e953f184b79d014e35d26e9cde54627f35a266527c86" exitCode=0 Feb 16 21:11:43 crc kubenswrapper[4805]: I0216 21:11:43.446309 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mftrj" event={"ID":"ca4cab4f-def6-4e0a-b666-7a8e30cc8705","Type":"ContainerDied","Data":"a99b2df77ff7b23fd766e953f184b79d014e35d26e9cde54627f35a266527c86"} Feb 16 21:11:44 crc kubenswrapper[4805]: I0216 21:11:44.461661 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mftrj" 
event={"ID":"ca4cab4f-def6-4e0a-b666-7a8e30cc8705","Type":"ContainerStarted","Data":"d1c100a030960bf8767b33b0bdd2a4704267f5d1066c7a363aa52c34aed2ef65"} Feb 16 21:11:44 crc kubenswrapper[4805]: I0216 21:11:44.497914 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mftrj" podStartSLOduration=2.095252079 podStartE2EDuration="3.497890154s" podCreationTimestamp="2026-02-16 21:11:41 +0000 UTC" firstStartedPulling="2026-02-16 21:11:42.438566653 +0000 UTC m=+920.257249988" lastFinishedPulling="2026-02-16 21:11:43.841204728 +0000 UTC m=+921.659888063" observedRunningTime="2026-02-16 21:11:44.488915247 +0000 UTC m=+922.307598562" watchObservedRunningTime="2026-02-16 21:11:44.497890154 +0000 UTC m=+922.316573489" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.132105 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nlsn9"] Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.133434 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nlsn9" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.135824 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-s4h8j" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.135825 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.135823 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.141297 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nlsn9"] Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.304929 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnvc2\" (UniqueName: \"kubernetes.io/projected/8b4db840-b551-49a7-b35b-b6a52a9e78a2-kube-api-access-rnvc2\") pod \"openstack-operator-index-nlsn9\" (UID: \"8b4db840-b551-49a7-b35b-b6a52a9e78a2\") " pod="openstack-operators/openstack-operator-index-nlsn9" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.406784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnvc2\" (UniqueName: \"kubernetes.io/projected/8b4db840-b551-49a7-b35b-b6a52a9e78a2-kube-api-access-rnvc2\") pod \"openstack-operator-index-nlsn9\" (UID: \"8b4db840-b551-49a7-b35b-b6a52a9e78a2\") " pod="openstack-operators/openstack-operator-index-nlsn9" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.429010 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnvc2\" (UniqueName: \"kubernetes.io/projected/8b4db840-b551-49a7-b35b-b6a52a9e78a2-kube-api-access-rnvc2\") pod \"openstack-operator-index-nlsn9\" (UID: 
\"8b4db840-b551-49a7-b35b-b6a52a9e78a2\") " pod="openstack-operators/openstack-operator-index-nlsn9" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.456239 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nlsn9" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.463324 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7spzb" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.555166 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-bbdrb" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.903866 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.903967 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.964186 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:45 crc kubenswrapper[4805]: I0216 21:11:45.964269 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nlsn9"] Feb 16 21:11:46 crc kubenswrapper[4805]: I0216 21:11:46.482402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nlsn9" event={"ID":"8b4db840-b551-49a7-b35b-b6a52a9e78a2","Type":"ContainerStarted","Data":"85a06f9473dcfa67df8e51fdf3e9e81f9181f36fb42c81adb4bb7e4088b351cf"} Feb 16 21:11:46 crc kubenswrapper[4805]: I0216 21:11:46.552294 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:49 crc kubenswrapper[4805]: I0216 
21:11:49.728585 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-td7v8"] Feb 16 21:11:49 crc kubenswrapper[4805]: I0216 21:11:49.729368 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-td7v8" podUID="ca5f2389-32be-41b0-9f24-2954901f2cab" containerName="registry-server" containerID="cri-o://daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34" gracePeriod=2 Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.232565 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.397440 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-utilities\") pod \"ca5f2389-32be-41b0-9f24-2954901f2cab\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.397924 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-catalog-content\") pod \"ca5f2389-32be-41b0-9f24-2954901f2cab\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.398284 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql2zw\" (UniqueName: \"kubernetes.io/projected/ca5f2389-32be-41b0-9f24-2954901f2cab-kube-api-access-ql2zw\") pod \"ca5f2389-32be-41b0-9f24-2954901f2cab\" (UID: \"ca5f2389-32be-41b0-9f24-2954901f2cab\") " Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.399120 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-utilities" (OuterVolumeSpecName: 
"utilities") pod "ca5f2389-32be-41b0-9f24-2954901f2cab" (UID: "ca5f2389-32be-41b0-9f24-2954901f2cab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.409664 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca5f2389-32be-41b0-9f24-2954901f2cab-kube-api-access-ql2zw" (OuterVolumeSpecName: "kube-api-access-ql2zw") pod "ca5f2389-32be-41b0-9f24-2954901f2cab" (UID: "ca5f2389-32be-41b0-9f24-2954901f2cab"). InnerVolumeSpecName "kube-api-access-ql2zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.440801 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca5f2389-32be-41b0-9f24-2954901f2cab" (UID: "ca5f2389-32be-41b0-9f24-2954901f2cab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.500801 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.500845 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql2zw\" (UniqueName: \"kubernetes.io/projected/ca5f2389-32be-41b0-9f24-2954901f2cab-kube-api-access-ql2zw\") on node \"crc\" DevicePath \"\"" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.500861 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5f2389-32be-41b0-9f24-2954901f2cab-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.529133 4805 generic.go:334] "Generic (PLEG): container finished" podID="ca5f2389-32be-41b0-9f24-2954901f2cab" containerID="daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34" exitCode=0 Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.529415 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-td7v8" event={"ID":"ca5f2389-32be-41b0-9f24-2954901f2cab","Type":"ContainerDied","Data":"daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34"} Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.529527 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-td7v8" event={"ID":"ca5f2389-32be-41b0-9f24-2954901f2cab","Type":"ContainerDied","Data":"13dfba9d6866d5a69bbf3ae7ac6263c52329aeb7c45367d90faf9a7d4eb048ee"} Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.529630 4805 scope.go:117] "RemoveContainer" containerID="daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 
21:11:50.529865 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-td7v8" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.543525 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nlsn9" event={"ID":"8b4db840-b551-49a7-b35b-b6a52a9e78a2","Type":"ContainerStarted","Data":"ff4f0e531aeb2c2db4ece5fa5565d339150136123a7bde58adcdfb7f799fa9f6"} Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.566708 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nlsn9" podStartSLOduration=1.865425254 podStartE2EDuration="5.566685786s" podCreationTimestamp="2026-02-16 21:11:45 +0000 UTC" firstStartedPulling="2026-02-16 21:11:45.974175723 +0000 UTC m=+923.792859018" lastFinishedPulling="2026-02-16 21:11:49.675436245 +0000 UTC m=+927.494119550" observedRunningTime="2026-02-16 21:11:50.559299623 +0000 UTC m=+928.377982928" watchObservedRunningTime="2026-02-16 21:11:50.566685786 +0000 UTC m=+928.385369091" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.573694 4805 scope.go:117] "RemoveContainer" containerID="fe1ec92b53cc37dfd42966faa0bd012ebde8ff370512c87ee17b015113eea9c2" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.592765 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-td7v8"] Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.594932 4805 scope.go:117] "RemoveContainer" containerID="f0c31e13b6b91fd59756c98706b015a88b91976517c235279b606867bf6caa95" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.600126 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-td7v8"] Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.617295 4805 scope.go:117] "RemoveContainer" containerID="daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34" Feb 16 
21:11:50 crc kubenswrapper[4805]: E0216 21:11:50.617673 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34\": container with ID starting with daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34 not found: ID does not exist" containerID="daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.617706 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34"} err="failed to get container status \"daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34\": rpc error: code = NotFound desc = could not find container \"daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34\": container with ID starting with daa189460af2be642db3c3eb4a06b8fbe145d1e44f99e592c42963567125ac34 not found: ID does not exist" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.617750 4805 scope.go:117] "RemoveContainer" containerID="fe1ec92b53cc37dfd42966faa0bd012ebde8ff370512c87ee17b015113eea9c2" Feb 16 21:11:50 crc kubenswrapper[4805]: E0216 21:11:50.618141 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe1ec92b53cc37dfd42966faa0bd012ebde8ff370512c87ee17b015113eea9c2\": container with ID starting with fe1ec92b53cc37dfd42966faa0bd012ebde8ff370512c87ee17b015113eea9c2 not found: ID does not exist" containerID="fe1ec92b53cc37dfd42966faa0bd012ebde8ff370512c87ee17b015113eea9c2" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.618186 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe1ec92b53cc37dfd42966faa0bd012ebde8ff370512c87ee17b015113eea9c2"} err="failed to get container status 
\"fe1ec92b53cc37dfd42966faa0bd012ebde8ff370512c87ee17b015113eea9c2\": rpc error: code = NotFound desc = could not find container \"fe1ec92b53cc37dfd42966faa0bd012ebde8ff370512c87ee17b015113eea9c2\": container with ID starting with fe1ec92b53cc37dfd42966faa0bd012ebde8ff370512c87ee17b015113eea9c2 not found: ID does not exist" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.618211 4805 scope.go:117] "RemoveContainer" containerID="f0c31e13b6b91fd59756c98706b015a88b91976517c235279b606867bf6caa95" Feb 16 21:11:50 crc kubenswrapper[4805]: E0216 21:11:50.618970 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0c31e13b6b91fd59756c98706b015a88b91976517c235279b606867bf6caa95\": container with ID starting with f0c31e13b6b91fd59756c98706b015a88b91976517c235279b606867bf6caa95 not found: ID does not exist" containerID="f0c31e13b6b91fd59756c98706b015a88b91976517c235279b606867bf6caa95" Feb 16 21:11:50 crc kubenswrapper[4805]: I0216 21:11:50.619005 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0c31e13b6b91fd59756c98706b015a88b91976517c235279b606867bf6caa95"} err="failed to get container status \"f0c31e13b6b91fd59756c98706b015a88b91976517c235279b606867bf6caa95\": rpc error: code = NotFound desc = could not find container \"f0c31e13b6b91fd59756c98706b015a88b91976517c235279b606867bf6caa95\": container with ID starting with f0c31e13b6b91fd59756c98706b015a88b91976517c235279b606867bf6caa95 not found: ID does not exist" Feb 16 21:11:51 crc kubenswrapper[4805]: I0216 21:11:51.481056 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:51 crc kubenswrapper[4805]: I0216 21:11:51.481442 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:51 crc kubenswrapper[4805]: I0216 
21:11:51.526494 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:51 crc kubenswrapper[4805]: I0216 21:11:51.611999 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca5f2389-32be-41b0-9f24-2954901f2cab" path="/var/lib/kubelet/pods/ca5f2389-32be-41b0-9f24-2954901f2cab/volumes" Feb 16 21:11:51 crc kubenswrapper[4805]: I0216 21:11:51.616124 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:11:52 crc kubenswrapper[4805]: I0216 21:11:52.939540 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-smkdm"] Feb 16 21:11:52 crc kubenswrapper[4805]: E0216 21:11:52.940335 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca5f2389-32be-41b0-9f24-2954901f2cab" containerName="extract-utilities" Feb 16 21:11:52 crc kubenswrapper[4805]: I0216 21:11:52.940356 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca5f2389-32be-41b0-9f24-2954901f2cab" containerName="extract-utilities" Feb 16 21:11:52 crc kubenswrapper[4805]: E0216 21:11:52.940376 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca5f2389-32be-41b0-9f24-2954901f2cab" containerName="registry-server" Feb 16 21:11:52 crc kubenswrapper[4805]: I0216 21:11:52.940388 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca5f2389-32be-41b0-9f24-2954901f2cab" containerName="registry-server" Feb 16 21:11:52 crc kubenswrapper[4805]: E0216 21:11:52.940418 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca5f2389-32be-41b0-9f24-2954901f2cab" containerName="extract-content" Feb 16 21:11:52 crc kubenswrapper[4805]: I0216 21:11:52.940431 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca5f2389-32be-41b0-9f24-2954901f2cab" containerName="extract-content" Feb 16 21:11:52 crc kubenswrapper[4805]: I0216 
21:11:52.940769 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca5f2389-32be-41b0-9f24-2954901f2cab" containerName="registry-server" Feb 16 21:11:52 crc kubenswrapper[4805]: I0216 21:11:52.946314 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:11:52 crc kubenswrapper[4805]: I0216 21:11:52.962458 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-smkdm"] Feb 16 21:11:53 crc kubenswrapper[4805]: I0216 21:11:53.041932 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3-utilities\") pod \"certified-operators-smkdm\" (UID: \"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3\") " pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:11:53 crc kubenswrapper[4805]: I0216 21:11:53.042006 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgkc6\" (UniqueName: \"kubernetes.io/projected/e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3-kube-api-access-jgkc6\") pod \"certified-operators-smkdm\" (UID: \"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3\") " pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:11:53 crc kubenswrapper[4805]: I0216 21:11:53.042046 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3-catalog-content\") pod \"certified-operators-smkdm\" (UID: \"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3\") " pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:11:53 crc kubenswrapper[4805]: I0216 21:11:53.143254 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3-catalog-content\") pod \"certified-operators-smkdm\" (UID: \"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3\") " pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:11:53 crc kubenswrapper[4805]: I0216 21:11:53.143433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3-utilities\") pod \"certified-operators-smkdm\" (UID: \"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3\") " pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:11:53 crc kubenswrapper[4805]: I0216 21:11:53.143555 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgkc6\" (UniqueName: \"kubernetes.io/projected/e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3-kube-api-access-jgkc6\") pod \"certified-operators-smkdm\" (UID: \"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3\") " pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:11:53 crc kubenswrapper[4805]: I0216 21:11:53.143892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3-catalog-content\") pod \"certified-operators-smkdm\" (UID: \"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3\") " pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:11:53 crc kubenswrapper[4805]: I0216 21:11:53.144085 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3-utilities\") pod \"certified-operators-smkdm\" (UID: \"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3\") " pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:11:53 crc kubenswrapper[4805]: I0216 21:11:53.165404 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgkc6\" (UniqueName: 
\"kubernetes.io/projected/e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3-kube-api-access-jgkc6\") pod \"certified-operators-smkdm\" (UID: \"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3\") " pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:11:53 crc kubenswrapper[4805]: I0216 21:11:53.324636 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:11:53 crc kubenswrapper[4805]: I0216 21:11:53.788320 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-smkdm"] Feb 16 21:11:54 crc kubenswrapper[4805]: I0216 21:11:54.600241 4805 generic.go:334] "Generic (PLEG): container finished" podID="e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3" containerID="60aa6799baab8ab7e3a8548638e0a02e7a598e97c263f87392be05f3ea3a1131" exitCode=0 Feb 16 21:11:54 crc kubenswrapper[4805]: I0216 21:11:54.600294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smkdm" event={"ID":"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3","Type":"ContainerDied","Data":"60aa6799baab8ab7e3a8548638e0a02e7a598e97c263f87392be05f3ea3a1131"} Feb 16 21:11:54 crc kubenswrapper[4805]: I0216 21:11:54.600649 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smkdm" event={"ID":"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3","Type":"ContainerStarted","Data":"5547d33588c9746af5e284f784a70382d20153640df10db7cac8abed64a7db1a"} Feb 16 21:11:55 crc kubenswrapper[4805]: I0216 21:11:55.456696 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nlsn9" Feb 16 21:11:55 crc kubenswrapper[4805]: I0216 21:11:55.457081 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nlsn9" Feb 16 21:11:55 crc kubenswrapper[4805]: I0216 21:11:55.489542 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack-operators/openstack-operator-index-nlsn9" Feb 16 21:11:55 crc kubenswrapper[4805]: I0216 21:11:55.634196 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nlsn9" Feb 16 21:11:55 crc kubenswrapper[4805]: I0216 21:11:55.736853 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-87c7n" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.370193 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz"] Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.378065 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.382088 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-gr2hw" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.389824 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz"] Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.516482 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-util\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.516875 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqwbg\" (UniqueName: 
\"kubernetes.io/projected/3110bc98-6c48-4dac-a96d-14ab481061c1-kube-api-access-rqwbg\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.516976 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-bundle\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.618373 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-util\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.618841 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-util\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.618961 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqwbg\" (UniqueName: \"kubernetes.io/projected/3110bc98-6c48-4dac-a96d-14ab481061c1-kube-api-access-rqwbg\") pod 
\"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.619055 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-bundle\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.619405 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-bundle\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.654807 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqwbg\" (UniqueName: \"kubernetes.io/projected/3110bc98-6c48-4dac-a96d-14ab481061c1-kube-api-access-rqwbg\") pod \"d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.702940 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.926314 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mftrj"] Feb 16 21:11:57 crc kubenswrapper[4805]: I0216 21:11:57.926699 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mftrj" podUID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" containerName="registry-server" containerID="cri-o://d1c100a030960bf8767b33b0bdd2a4704267f5d1066c7a363aa52c34aed2ef65" gracePeriod=2 Feb 16 21:11:58 crc kubenswrapper[4805]: I0216 21:11:58.640251 4805 generic.go:334] "Generic (PLEG): container finished" podID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" containerID="d1c100a030960bf8767b33b0bdd2a4704267f5d1066c7a363aa52c34aed2ef65" exitCode=0 Feb 16 21:11:58 crc kubenswrapper[4805]: I0216 21:11:58.640334 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mftrj" event={"ID":"ca4cab4f-def6-4e0a-b666-7a8e30cc8705","Type":"ContainerDied","Data":"d1c100a030960bf8767b33b0bdd2a4704267f5d1066c7a363aa52c34aed2ef65"} Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.126551 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.259687 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grbcq\" (UniqueName: \"kubernetes.io/projected/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-kube-api-access-grbcq\") pod \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.261746 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-utilities\") pod \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.262156 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-catalog-content\") pod \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\" (UID: \"ca4cab4f-def6-4e0a-b666-7a8e30cc8705\") " Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.262641 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-utilities" (OuterVolumeSpecName: "utilities") pod "ca4cab4f-def6-4e0a-b666-7a8e30cc8705" (UID: "ca4cab4f-def6-4e0a-b666-7a8e30cc8705"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.263025 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.265323 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-kube-api-access-grbcq" (OuterVolumeSpecName: "kube-api-access-grbcq") pod "ca4cab4f-def6-4e0a-b666-7a8e30cc8705" (UID: "ca4cab4f-def6-4e0a-b666-7a8e30cc8705"). InnerVolumeSpecName "kube-api-access-grbcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.314106 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca4cab4f-def6-4e0a-b666-7a8e30cc8705" (UID: "ca4cab4f-def6-4e0a-b666-7a8e30cc8705"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:12:00 crc kubenswrapper[4805]: W0216 21:12:00.315401 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3110bc98_6c48_4dac_a96d_14ab481061c1.slice/crio-a976daeade29b86b664dc3b13965a85c63656f058a8215e171ceb2efa9dcae19 WatchSource:0}: Error finding container a976daeade29b86b664dc3b13965a85c63656f058a8215e171ceb2efa9dcae19: Status 404 returned error can't find the container with id a976daeade29b86b664dc3b13965a85c63656f058a8215e171ceb2efa9dcae19 Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.322166 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz"] Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.365096 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.365127 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grbcq\" (UniqueName: \"kubernetes.io/projected/ca4cab4f-def6-4e0a-b666-7a8e30cc8705-kube-api-access-grbcq\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.657169 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mftrj" event={"ID":"ca4cab4f-def6-4e0a-b666-7a8e30cc8705","Type":"ContainerDied","Data":"033fa8ce5fbb32972c7e1137da6f3f29dec71676f5085d6c0f6e1f940cb0a15a"} Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.657189 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mftrj" Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.657419 4805 scope.go:117] "RemoveContainer" containerID="d1c100a030960bf8767b33b0bdd2a4704267f5d1066c7a363aa52c34aed2ef65" Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.659342 4805 generic.go:334] "Generic (PLEG): container finished" podID="e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3" containerID="45c66d8c076d19c59efde844b9d2a60aac7dd35a08c68b653617237cced7fbc1" exitCode=0 Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.659435 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smkdm" event={"ID":"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3","Type":"ContainerDied","Data":"45c66d8c076d19c59efde844b9d2a60aac7dd35a08c68b653617237cced7fbc1"} Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.662702 4805 generic.go:334] "Generic (PLEG): container finished" podID="3110bc98-6c48-4dac-a96d-14ab481061c1" containerID="7a8e57d67e28aa26ce5df29f181f30adc7c0fbd3f5e0a9f291884dc9013bae2c" exitCode=0 Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.662749 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" event={"ID":"3110bc98-6c48-4dac-a96d-14ab481061c1","Type":"ContainerDied","Data":"7a8e57d67e28aa26ce5df29f181f30adc7c0fbd3f5e0a9f291884dc9013bae2c"} Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.662775 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" event={"ID":"3110bc98-6c48-4dac-a96d-14ab481061c1","Type":"ContainerStarted","Data":"a976daeade29b86b664dc3b13965a85c63656f058a8215e171ceb2efa9dcae19"} Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.710630 4805 scope.go:117] "RemoveContainer" containerID="a99b2df77ff7b23fd766e953f184b79d014e35d26e9cde54627f35a266527c86" Feb 16 
21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.725004 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mftrj"] Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.730100 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mftrj"] Feb 16 21:12:00 crc kubenswrapper[4805]: I0216 21:12:00.742095 4805 scope.go:117] "RemoveContainer" containerID="dd8b68dc2e127740807c80197866f01f4e527b621d4b6ac4a83c2192fb2f5de7" Feb 16 21:12:01 crc kubenswrapper[4805]: I0216 21:12:01.608339 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" path="/var/lib/kubelet/pods/ca4cab4f-def6-4e0a-b666-7a8e30cc8705/volumes" Feb 16 21:12:01 crc kubenswrapper[4805]: I0216 21:12:01.672086 4805 generic.go:334] "Generic (PLEG): container finished" podID="3110bc98-6c48-4dac-a96d-14ab481061c1" containerID="edf18bc2fbcf1e71a0f2b1f9c0a00603aed89cba26a94cb33b82b7700c1bc6a5" exitCode=0 Feb 16 21:12:01 crc kubenswrapper[4805]: I0216 21:12:01.672157 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" event={"ID":"3110bc98-6c48-4dac-a96d-14ab481061c1","Type":"ContainerDied","Data":"edf18bc2fbcf1e71a0f2b1f9c0a00603aed89cba26a94cb33b82b7700c1bc6a5"} Feb 16 21:12:01 crc kubenswrapper[4805]: I0216 21:12:01.676111 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smkdm" event={"ID":"e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3","Type":"ContainerStarted","Data":"ed3223fbfe43546122400fb390d698bf5770c6d77d767a7c95acc5745c5dcc83"} Feb 16 21:12:01 crc kubenswrapper[4805]: I0216 21:12:01.719903 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-smkdm" podStartSLOduration=3.218161528 podStartE2EDuration="9.719886969s" podCreationTimestamp="2026-02-16 
21:11:52 +0000 UTC" firstStartedPulling="2026-02-16 21:11:54.602975747 +0000 UTC m=+932.421659052" lastFinishedPulling="2026-02-16 21:12:01.104701188 +0000 UTC m=+938.923384493" observedRunningTime="2026-02-16 21:12:01.713542223 +0000 UTC m=+939.532225518" watchObservedRunningTime="2026-02-16 21:12:01.719886969 +0000 UTC m=+939.538570264" Feb 16 21:12:02 crc kubenswrapper[4805]: I0216 21:12:02.708485 4805 generic.go:334] "Generic (PLEG): container finished" podID="3110bc98-6c48-4dac-a96d-14ab481061c1" containerID="16b5378b1c5e9bec708b2e01bc7b5668ccb5d1797d1d4da1c0b4d460ac844039" exitCode=0 Feb 16 21:12:02 crc kubenswrapper[4805]: I0216 21:12:02.709538 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" event={"ID":"3110bc98-6c48-4dac-a96d-14ab481061c1","Type":"ContainerDied","Data":"16b5378b1c5e9bec708b2e01bc7b5668ccb5d1797d1d4da1c0b4d460ac844039"} Feb 16 21:12:03 crc kubenswrapper[4805]: I0216 21:12:03.325372 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:12:03 crc kubenswrapper[4805]: I0216 21:12:03.325447 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:12:03 crc kubenswrapper[4805]: I0216 21:12:03.373778 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-smkdm" Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.174257 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.347607 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-bundle\") pod \"3110bc98-6c48-4dac-a96d-14ab481061c1\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.347903 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqwbg\" (UniqueName: \"kubernetes.io/projected/3110bc98-6c48-4dac-a96d-14ab481061c1-kube-api-access-rqwbg\") pod \"3110bc98-6c48-4dac-a96d-14ab481061c1\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.348050 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-util\") pod \"3110bc98-6c48-4dac-a96d-14ab481061c1\" (UID: \"3110bc98-6c48-4dac-a96d-14ab481061c1\") " Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.354183 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-bundle" (OuterVolumeSpecName: "bundle") pod "3110bc98-6c48-4dac-a96d-14ab481061c1" (UID: "3110bc98-6c48-4dac-a96d-14ab481061c1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.362021 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3110bc98-6c48-4dac-a96d-14ab481061c1-kube-api-access-rqwbg" (OuterVolumeSpecName: "kube-api-access-rqwbg") pod "3110bc98-6c48-4dac-a96d-14ab481061c1" (UID: "3110bc98-6c48-4dac-a96d-14ab481061c1"). InnerVolumeSpecName "kube-api-access-rqwbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.362202 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-util" (OuterVolumeSpecName: "util") pod "3110bc98-6c48-4dac-a96d-14ab481061c1" (UID: "3110bc98-6c48-4dac-a96d-14ab481061c1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.451151 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.451184 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqwbg\" (UniqueName: \"kubernetes.io/projected/3110bc98-6c48-4dac-a96d-14ab481061c1-kube-api-access-rqwbg\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.451195 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3110bc98-6c48-4dac-a96d-14ab481061c1-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.731869 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" event={"ID":"3110bc98-6c48-4dac-a96d-14ab481061c1","Type":"ContainerDied","Data":"a976daeade29b86b664dc3b13965a85c63656f058a8215e171ceb2efa9dcae19"} Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.731929 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz" Feb 16 21:12:04 crc kubenswrapper[4805]: I0216 21:12:04.731936 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a976daeade29b86b664dc3b13965a85c63656f058a8215e171ceb2efa9dcae19" Feb 16 21:12:08 crc kubenswrapper[4805]: I0216 21:12:08.102229 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:12:08 crc kubenswrapper[4805]: I0216 21:12:08.102800 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:12:08 crc kubenswrapper[4805]: I0216 21:12:08.102852 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:12:08 crc kubenswrapper[4805]: I0216 21:12:08.103631 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e746550f7cf0d50be9739ce7e97b17ef93c5c8ee315aa0d1535183b0c6cfe9db"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:12:08 crc kubenswrapper[4805]: I0216 21:12:08.103700 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" 
containerName="machine-config-daemon" containerID="cri-o://e746550f7cf0d50be9739ce7e97b17ef93c5c8ee315aa0d1535183b0c6cfe9db" gracePeriod=600 Feb 16 21:12:08 crc kubenswrapper[4805]: I0216 21:12:08.779503 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="e746550f7cf0d50be9739ce7e97b17ef93c5c8ee315aa0d1535183b0c6cfe9db" exitCode=0 Feb 16 21:12:08 crc kubenswrapper[4805]: I0216 21:12:08.779599 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"e746550f7cf0d50be9739ce7e97b17ef93c5c8ee315aa0d1535183b0c6cfe9db"} Feb 16 21:12:08 crc kubenswrapper[4805]: I0216 21:12:08.780292 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"5f1616af32f423ba92145c911bf150c6fe834753890981f8e09fc4faccf82ee6"} Feb 16 21:12:08 crc kubenswrapper[4805]: I0216 21:12:08.780370 4805 scope.go:117] "RemoveContainer" containerID="2ea6a527da3d45efcd7fbad2ab314c9a6cf5f646dedd04c29a2b897c9c0a84d1" Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.411447 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g"] Feb 16 21:12:13 crc kubenswrapper[4805]: E0216 21:12:13.412339 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" containerName="extract-content" Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.412355 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" containerName="extract-content" Feb 16 21:12:13 crc kubenswrapper[4805]: E0216 21:12:13.412382 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" containerName="registry-server" Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.412391 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" containerName="registry-server" Feb 16 21:12:13 crc kubenswrapper[4805]: E0216 21:12:13.412420 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3110bc98-6c48-4dac-a96d-14ab481061c1" containerName="util" Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.412429 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3110bc98-6c48-4dac-a96d-14ab481061c1" containerName="util" Feb 16 21:12:13 crc kubenswrapper[4805]: E0216 21:12:13.412442 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3110bc98-6c48-4dac-a96d-14ab481061c1" containerName="pull" Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.412451 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3110bc98-6c48-4dac-a96d-14ab481061c1" containerName="pull" Feb 16 21:12:13 crc kubenswrapper[4805]: E0216 21:12:13.412462 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3110bc98-6c48-4dac-a96d-14ab481061c1" containerName="extract" Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.412471 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3110bc98-6c48-4dac-a96d-14ab481061c1" containerName="extract" Feb 16 21:12:13 crc kubenswrapper[4805]: E0216 21:12:13.412486 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" containerName="extract-utilities" Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.412494 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" containerName="extract-utilities" Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.412677 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4cab4f-def6-4e0a-b666-7a8e30cc8705" 
containerName="registry-server"
Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.412694 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3110bc98-6c48-4dac-a96d-14ab481061c1" containerName="extract"
Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.413407 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g"
Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.417526 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-smkdm"
Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.418592 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-97xk5"
Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.455188 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g"]
Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.538738 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rl68\" (UniqueName: \"kubernetes.io/projected/6cead8ad-a49a-4a9c-9491-99ec351f9bbe-kube-api-access-6rl68\") pod \"openstack-operator-controller-init-7dd97cff99-jm66g\" (UID: \"6cead8ad-a49a-4a9c-9491-99ec351f9bbe\") " pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g"
Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.640320 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rl68\" (UniqueName: \"kubernetes.io/projected/6cead8ad-a49a-4a9c-9491-99ec351f9bbe-kube-api-access-6rl68\") pod \"openstack-operator-controller-init-7dd97cff99-jm66g\" (UID: \"6cead8ad-a49a-4a9c-9491-99ec351f9bbe\") " pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g"
Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.668712 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rl68\" (UniqueName: \"kubernetes.io/projected/6cead8ad-a49a-4a9c-9491-99ec351f9bbe-kube-api-access-6rl68\") pod \"openstack-operator-controller-init-7dd97cff99-jm66g\" (UID: \"6cead8ad-a49a-4a9c-9491-99ec351f9bbe\") " pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g"
Feb 16 21:12:13 crc kubenswrapper[4805]: I0216 21:12:13.732575 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g"
Feb 16 21:12:14 crc kubenswrapper[4805]: I0216 21:12:14.211898 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g"]
Feb 16 21:12:14 crc kubenswrapper[4805]: I0216 21:12:14.850392 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g" event={"ID":"6cead8ad-a49a-4a9c-9491-99ec351f9bbe","Type":"ContainerStarted","Data":"912665010c157b9d7e21973f635664d6f17a42ab4f48af60b1e540a0b874cadf"}
Feb 16 21:12:15 crc kubenswrapper[4805]: I0216 21:12:15.572077 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-smkdm"]
Feb 16 21:12:15 crc kubenswrapper[4805]: I0216 21:12:15.931752 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pmfv7"]
Feb 16 21:12:15 crc kubenswrapper[4805]: I0216 21:12:15.932391 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pmfv7" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" containerName="registry-server" containerID="cri-o://734fb17a148b8676e9f140553ba5f1af605fd35f9d3463f68cc459f4a535d0e5" gracePeriod=2
Feb 16 21:12:16 crc kubenswrapper[4805]: I0216 21:12:16.875553 4805 generic.go:334] "Generic (PLEG): container finished" podID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" containerID="734fb17a148b8676e9f140553ba5f1af605fd35f9d3463f68cc459f4a535d0e5" exitCode=0
Feb 16 21:12:16 crc kubenswrapper[4805]: I0216 21:12:16.875603 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmfv7" event={"ID":"18fc0a7f-912c-4900-9bfe-9c2b5049eba4","Type":"ContainerDied","Data":"734fb17a148b8676e9f140553ba5f1af605fd35f9d3463f68cc459f4a535d0e5"}
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.224825 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmfv7"
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.312436 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-697p2\" (UniqueName: \"kubernetes.io/projected/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-kube-api-access-697p2\") pod \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") "
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.312526 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-utilities\") pod \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") "
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.312562 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-catalog-content\") pod \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\" (UID: \"18fc0a7f-912c-4900-9bfe-9c2b5049eba4\") "
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.313890 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-utilities" (OuterVolumeSpecName: "utilities") pod "18fc0a7f-912c-4900-9bfe-9c2b5049eba4" (UID: "18fc0a7f-912c-4900-9bfe-9c2b5049eba4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.321926 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-kube-api-access-697p2" (OuterVolumeSpecName: "kube-api-access-697p2") pod "18fc0a7f-912c-4900-9bfe-9c2b5049eba4" (UID: "18fc0a7f-912c-4900-9bfe-9c2b5049eba4"). InnerVolumeSpecName "kube-api-access-697p2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.381483 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18fc0a7f-912c-4900-9bfe-9c2b5049eba4" (UID: "18fc0a7f-912c-4900-9bfe-9c2b5049eba4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.414770 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-697p2\" (UniqueName: \"kubernetes.io/projected/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-kube-api-access-697p2\") on node \"crc\" DevicePath \"\""
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.414807 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.414819 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18fc0a7f-912c-4900-9bfe-9c2b5049eba4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.896673 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g" event={"ID":"6cead8ad-a49a-4a9c-9491-99ec351f9bbe","Type":"ContainerStarted","Data":"e821ffdd057bed8bc26ab5fb531cba94eff516cd748ea5dac545a9898c504fa6"}
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.896785 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g"
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.900095 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmfv7" event={"ID":"18fc0a7f-912c-4900-9bfe-9c2b5049eba4","Type":"ContainerDied","Data":"de5b55369c4d38784a3033784b9a7355ab183915b87c16d360710ba4b85ee501"}
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.900152 4805 scope.go:117] "RemoveContainer" containerID="734fb17a148b8676e9f140553ba5f1af605fd35f9d3463f68cc459f4a535d0e5"
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.900161 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmfv7"
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.922424 4805 scope.go:117] "RemoveContainer" containerID="510a2ef111c2b5b6c446153142f083111bc6a8ac8a81ac36c287cf4f3f59a3b5"
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.979476 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g" podStartSLOduration=1.6835616230000001 podStartE2EDuration="5.97945787s" podCreationTimestamp="2026-02-16 21:12:13 +0000 UTC" firstStartedPulling="2026-02-16 21:12:14.204577242 +0000 UTC m=+952.023260537" lastFinishedPulling="2026-02-16 21:12:18.500473489 +0000 UTC m=+956.319156784" observedRunningTime="2026-02-16 21:12:18.944159345 +0000 UTC m=+956.762842650" watchObservedRunningTime="2026-02-16 21:12:18.97945787 +0000 UTC m=+956.798141185"
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.982917 4805 scope.go:117] "RemoveContainer" containerID="4ed74231f6a2e0f9e3ce7b1b2475b0f442caed7f109d7e99a2adcd54d19c3f6f"
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.988197 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pmfv7"]
Feb 16 21:12:18 crc kubenswrapper[4805]: I0216 21:12:18.998020 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pmfv7"]
Feb 16 21:12:19 crc kubenswrapper[4805]: I0216 21:12:19.614921 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" path="/var/lib/kubelet/pods/18fc0a7f-912c-4900-9bfe-9c2b5049eba4/volumes"
Feb 16 21:12:23 crc kubenswrapper[4805]: I0216 21:12:23.736549 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7dd97cff99-jm66g"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.832579 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6"]
Feb 16 21:12:43 crc kubenswrapper[4805]: E0216 21:12:43.833591 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" containerName="registry-server"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.833607 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" containerName="registry-server"
Feb 16 21:12:43 crc kubenswrapper[4805]: E0216 21:12:43.833629 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" containerName="extract-utilities"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.833635 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" containerName="extract-utilities"
Feb 16 21:12:43 crc kubenswrapper[4805]: E0216 21:12:43.833652 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" containerName="extract-content"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.833658 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" containerName="extract-content"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.833819 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="18fc0a7f-912c-4900-9bfe-9c2b5049eba4" containerName="registry-server"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.834447 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.836381 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-n4wg2"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.839126 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg"]
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.840109 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.844513 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-v6d7d"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.852989 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6"]
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.859912 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs"]
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.860959 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.869626 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kcphz"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.869957 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg"]
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.881803 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-djjn2"]
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.882710 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.893818 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-5lgpj"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.899636 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-djjn2"]
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.916921 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs"]
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.933397 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-77f85"]
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.934646 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-77f85"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.942064 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-v9rgd"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.961700 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2"]
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.962767 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.965000 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-km2w9"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.965878 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktnvw\" (UniqueName: \"kubernetes.io/projected/5b0862c2-4070-4639-94cc-c29e08f49bf1-kube-api-access-ktnvw\") pod \"designate-operator-controller-manager-6d8bf5c495-jtxhs\" (UID: \"5b0862c2-4070-4639-94cc-c29e08f49bf1\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.965984 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjjvd\" (UniqueName: \"kubernetes.io/projected/e48823c7-c98e-447b-b539-1ce95bd2d3ba-kube-api-access-wjjvd\") pod \"cinder-operator-controller-manager-5d946d989d-dm6f6\" (UID: \"e48823c7-c98e-447b-b539-1ce95bd2d3ba\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.966021 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp678\" (UniqueName: \"kubernetes.io/projected/5bc499f8-3fb7-4e12-bb4c-1e903e0c4333-kube-api-access-vp678\") pod \"barbican-operator-controller-manager-868647ff47-cdzwg\" (UID: \"5bc499f8-3fb7-4e12-bb4c-1e903e0c4333\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.966043 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s2hn\" (UniqueName: \"kubernetes.io/projected/21218328-0794-4bb6-aa02-2bb8fa48f6b9-kube-api-access-7s2hn\") pod \"glance-operator-controller-manager-77987464f4-djjn2\" (UID: \"21218328-0794-4bb6-aa02-2bb8fa48f6b9\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2"
Feb 16 21:12:43 crc kubenswrapper[4805]: I0216 21:12:43.983788 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-77f85"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.020122 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-s2t59"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.024162 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.029353 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.033228 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pjpg8"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.066689 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.068046 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.068142 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv8xr\" (UniqueName: \"kubernetes.io/projected/f2b71132-ee94-4b2a-ad19-ab9dde9013ef-kube-api-access-wv8xr\") pod \"heat-operator-controller-manager-69f49c598c-77f85\" (UID: \"f2b71132-ee94-4b2a-ad19-ab9dde9013ef\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-77f85"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.068195 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjjvd\" (UniqueName: \"kubernetes.io/projected/e48823c7-c98e-447b-b539-1ce95bd2d3ba-kube-api-access-wjjvd\") pod \"cinder-operator-controller-manager-5d946d989d-dm6f6\" (UID: \"e48823c7-c98e-447b-b539-1ce95bd2d3ba\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.068231 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gktj4\" (UniqueName: \"kubernetes.io/projected/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-kube-api-access-gktj4\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.068252 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp678\" (UniqueName: \"kubernetes.io/projected/5bc499f8-3fb7-4e12-bb4c-1e903e0c4333-kube-api-access-vp678\") pod \"barbican-operator-controller-manager-868647ff47-cdzwg\" (UID: \"5bc499f8-3fb7-4e12-bb4c-1e903e0c4333\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.068276 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn4t4\" (UniqueName: \"kubernetes.io/projected/e38af58d-0049-4d9c-a834-ca048c0b171f-kube-api-access-xn4t4\") pod \"horizon-operator-controller-manager-5b9b8895d5-rwjb2\" (UID: \"e38af58d-0049-4d9c-a834-ca048c0b171f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.068295 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2hn\" (UniqueName: \"kubernetes.io/projected/21218328-0794-4bb6-aa02-2bb8fa48f6b9-kube-api-access-7s2hn\") pod \"glance-operator-controller-manager-77987464f4-djjn2\" (UID: \"21218328-0794-4bb6-aa02-2bb8fa48f6b9\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.068321 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktnvw\" (UniqueName: \"kubernetes.io/projected/5b0862c2-4070-4639-94cc-c29e08f49bf1-kube-api-access-ktnvw\") pod \"designate-operator-controller-manager-6d8bf5c495-jtxhs\" (UID: \"5b0862c2-4070-4639-94cc-c29e08f49bf1\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.068381 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.072445 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gpxn7"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.077853 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.107411 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktnvw\" (UniqueName: \"kubernetes.io/projected/5b0862c2-4070-4639-94cc-c29e08f49bf1-kube-api-access-ktnvw\") pod \"designate-operator-controller-manager-6d8bf5c495-jtxhs\" (UID: \"5b0862c2-4070-4639-94cc-c29e08f49bf1\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.114914 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2hn\" (UniqueName: \"kubernetes.io/projected/21218328-0794-4bb6-aa02-2bb8fa48f6b9-kube-api-access-7s2hn\") pod \"glance-operator-controller-manager-77987464f4-djjn2\" (UID: \"21218328-0794-4bb6-aa02-2bb8fa48f6b9\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.118649 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp678\" (UniqueName: \"kubernetes.io/projected/5bc499f8-3fb7-4e12-bb4c-1e903e0c4333-kube-api-access-vp678\") pod \"barbican-operator-controller-manager-868647ff47-cdzwg\" (UID: \"5bc499f8-3fb7-4e12-bb4c-1e903e0c4333\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.132498 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjjvd\" (UniqueName: \"kubernetes.io/projected/e48823c7-c98e-447b-b539-1ce95bd2d3ba-kube-api-access-wjjvd\") pod \"cinder-operator-controller-manager-5d946d989d-dm6f6\" (UID: \"e48823c7-c98e-447b-b539-1ce95bd2d3ba\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.141438 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.154813 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.160144 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-s2t59"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.162996 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.169361 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zm47\" (UniqueName: \"kubernetes.io/projected/431105e4-6dfd-4644-ae7a-521284b98eda-kube-api-access-9zm47\") pod \"ironic-operator-controller-manager-554564d7fc-4xgqc\" (UID: \"431105e4-6dfd-4644-ae7a-521284b98eda\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.169426 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv8xr\" (UniqueName: \"kubernetes.io/projected/f2b71132-ee94-4b2a-ad19-ab9dde9013ef-kube-api-access-wv8xr\") pod \"heat-operator-controller-manager-69f49c598c-77f85\" (UID: \"f2b71132-ee94-4b2a-ad19-ab9dde9013ef\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-77f85"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.169473 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gktj4\" (UniqueName: \"kubernetes.io/projected/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-kube-api-access-gktj4\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.169498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn4t4\" (UniqueName: \"kubernetes.io/projected/e38af58d-0049-4d9c-a834-ca048c0b171f-kube-api-access-xn4t4\") pod \"horizon-operator-controller-manager-5b9b8895d5-rwjb2\" (UID: \"e38af58d-0049-4d9c-a834-ca048c0b171f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.169621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59"
Feb 16 21:12:44 crc kubenswrapper[4805]: E0216 21:12:44.169752 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 21:12:44 crc kubenswrapper[4805]: E0216 21:12:44.169804 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert podName:a745e178-a8a5-4f2b-b9bd-ad41a35f6140 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:44.669786501 +0000 UTC m=+982.488469796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert") pod "infra-operator-controller-manager-79d975b745-s2t59" (UID: "a745e178-a8a5-4f2b-b9bd-ad41a35f6140") : secret "infra-operator-webhook-server-cert" not found
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.176583 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.185037 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gktj4\" (UniqueName: \"kubernetes.io/projected/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-kube-api-access-gktj4\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.188646 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn4t4\" (UniqueName: \"kubernetes.io/projected/e38af58d-0049-4d9c-a834-ca048c0b171f-kube-api-access-xn4t4\") pod \"horizon-operator-controller-manager-5b9b8895d5-rwjb2\" (UID: \"e38af58d-0049-4d9c-a834-ca048c0b171f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.188823 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.190199 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.196331 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv8xr\" (UniqueName: \"kubernetes.io/projected/f2b71132-ee94-4b2a-ad19-ab9dde9013ef-kube-api-access-wv8xr\") pod \"heat-operator-controller-manager-69f49c598c-77f85\" (UID: \"f2b71132-ee94-4b2a-ad19-ab9dde9013ef\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-77f85"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.200154 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.209092 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-b9m7z"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.250106 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.251178 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.253633 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-w4t8v"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.259114 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-77f85"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.262092 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.264081 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.265866 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-bpzht"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.273405 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.274371 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zm47\" (UniqueName: \"kubernetes.io/projected/431105e4-6dfd-4644-ae7a-521284b98eda-kube-api-access-9zm47\") pod \"ironic-operator-controller-manager-554564d7fc-4xgqc\" (UID: \"431105e4-6dfd-4644-ae7a-521284b98eda\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.274510 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbjb\" (UniqueName: \"kubernetes.io/projected/6bb4da12-019d-4101-a5eb-e0c85421d029-kube-api-access-5cbjb\") pod \"keystone-operator-controller-manager-b4d948c87-dghtj\" (UID: \"6bb4da12-019d-4101-a5eb-e0c85421d029\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.281346 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.284620 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.292456 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.297172 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zm47\" (UniqueName: \"kubernetes.io/projected/431105e4-6dfd-4644-ae7a-521284b98eda-kube-api-access-9zm47\") pod \"ironic-operator-controller-manager-554564d7fc-4xgqc\" (UID: \"431105e4-6dfd-4644-ae7a-521284b98eda\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.312310 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.313333 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.316923 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8chrf"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.325369 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.326876 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.330035 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tc9k2"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.348226 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.360099 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.368519 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs"]
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.369651 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.372641 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-b8fg4"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.375478 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw6hj\" (UniqueName: \"kubernetes.io/projected/f43b76e3-db2c-40f4-80fa-77ed9f196cf5-kube-api-access-zw6hj\") pod \"neutron-operator-controller-manager-64ddbf8bb-66xx2\" (UID: \"f43b76e3-db2c-40f4-80fa-77ed9f196cf5\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.375511 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6gn5\" (UniqueName: \"kubernetes.io/projected/8131c3df-9b2e-48f7-95c9-95a8d5ba9f69-kube-api-access-l6gn5\") pod \"nova-operator-controller-manager-567668f5cf-lzgrn\" (UID: \"8131c3df-9b2e-48f7-95c9-95a8d5ba9f69\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.375550 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbsk\" (UniqueName: \"kubernetes.io/projected/856ddca1-f396-432a-b33a-9fa0c1611e29-kube-api-access-5cbsk\") pod \"mariadb-operator-controller-manager-6994f66f48-mvwth\" (UID: \"856ddca1-f396-432a-b33a-9fa0c1611e29\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth"
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.375715 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cbjb\" (UniqueName: 
\"kubernetes.io/projected/6bb4da12-019d-4101-a5eb-e0c85421d029-kube-api-access-5cbjb\") pod \"keystone-operator-controller-manager-b4d948c87-dghtj\" (UID: \"6bb4da12-019d-4101-a5eb-e0c85421d029\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.375756 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp64b\" (UniqueName: \"kubernetes.io/projected/2840ffe3-d3c1-4faf-bb32-f9c17173713f-kube-api-access-dp64b\") pod \"manila-operator-controller-manager-54f6768c69-2mzjb\" (UID: \"2840ffe3-d3c1-4faf-bb32-f9c17173713f\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.387162 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.387467 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.393318 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.394335 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.403617 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cbjb\" (UniqueName: \"kubernetes.io/projected/6bb4da12-019d-4101-a5eb-e0c85421d029-kube-api-access-5cbjb\") pod \"keystone-operator-controller-manager-b4d948c87-dghtj\" (UID: \"6bb4da12-019d-4101-a5eb-e0c85421d029\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.406335 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.408098 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.414554 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.414848 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nhv4d" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.414970 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cdbff" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.417317 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.431869 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75"] 
Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.431998 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.434306 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-tfhgc" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.435451 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.466275 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.474138 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.477909 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5pr5\" (UniqueName: \"kubernetes.io/projected/32fb9648-24e5-4073-902e-f76ea1eaa512-kube-api-access-l5pr5\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4\" (UID: \"32fb9648-24e5-4073-902e-f76ea1eaa512\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.477960 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmxff\" (UniqueName: \"kubernetes.io/projected/3205b9a4-589f-4200-9e47-a073f38397c1-kube-api-access-bmxff\") pod \"ovn-operator-controller-manager-d44cf6b75-fsr75\" (UID: \"3205b9a4-589f-4200-9e47-a073f38397c1\") " 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.478001 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpmkv\" (UniqueName: \"kubernetes.io/projected/856ad725-988a-44f6-8cb1-57ff2498e192-kube-api-access-bpmkv\") pod \"octavia-operator-controller-manager-69f8888797-kt9zs\" (UID: \"856ad725-988a-44f6-8cb1-57ff2498e192\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.478536 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw6hj\" (UniqueName: \"kubernetes.io/projected/f43b76e3-db2c-40f4-80fa-77ed9f196cf5-kube-api-access-zw6hj\") pod \"neutron-operator-controller-manager-64ddbf8bb-66xx2\" (UID: \"f43b76e3-db2c-40f4-80fa-77ed9f196cf5\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.478569 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6gn5\" (UniqueName: \"kubernetes.io/projected/8131c3df-9b2e-48f7-95c9-95a8d5ba9f69-kube-api-access-l6gn5\") pod \"nova-operator-controller-manager-567668f5cf-lzgrn\" (UID: \"8131c3df-9b2e-48f7-95c9-95a8d5ba9f69\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.478642 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cbsk\" (UniqueName: \"kubernetes.io/projected/856ddca1-f396-432a-b33a-9fa0c1611e29-kube-api-access-5cbsk\") pod \"mariadb-operator-controller-manager-6994f66f48-mvwth\" (UID: \"856ddca1-f396-432a-b33a-9fa0c1611e29\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.478711 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lbr\" (UniqueName: \"kubernetes.io/projected/745993b9-7ebe-405b-9242-a561ed40c3a7-kube-api-access-w4lbr\") pod \"placement-operator-controller-manager-8497b45c89-s54qb\" (UID: \"745993b9-7ebe-405b-9242-a561ed40c3a7\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.478887 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp64b\" (UniqueName: \"kubernetes.io/projected/2840ffe3-d3c1-4faf-bb32-f9c17173713f-kube-api-access-dp64b\") pod \"manila-operator-controller-manager-54f6768c69-2mzjb\" (UID: \"2840ffe3-d3c1-4faf-bb32-f9c17173713f\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.478954 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4\" (UID: \"32fb9648-24e5-4073-902e-f76ea1eaa512\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.481701 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.484565 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.484627 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rjlzw" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.513555 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cbsk\" (UniqueName: \"kubernetes.io/projected/856ddca1-f396-432a-b33a-9fa0c1611e29-kube-api-access-5cbsk\") pod \"mariadb-operator-controller-manager-6994f66f48-mvwth\" (UID: \"856ddca1-f396-432a-b33a-9fa0c1611e29\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.526799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6gn5\" (UniqueName: \"kubernetes.io/projected/8131c3df-9b2e-48f7-95c9-95a8d5ba9f69-kube-api-access-l6gn5\") pod \"nova-operator-controller-manager-567668f5cf-lzgrn\" (UID: \"8131c3df-9b2e-48f7-95c9-95a8d5ba9f69\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.528381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp64b\" (UniqueName: \"kubernetes.io/projected/2840ffe3-d3c1-4faf-bb32-f9c17173713f-kube-api-access-dp64b\") pod \"manila-operator-controller-manager-54f6768c69-2mzjb\" (UID: \"2840ffe3-d3c1-4faf-bb32-f9c17173713f\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.528488 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.541154 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.544052 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-2kcn9" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.552287 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw6hj\" (UniqueName: \"kubernetes.io/projected/f43b76e3-db2c-40f4-80fa-77ed9f196cf5-kube-api-access-zw6hj\") pod \"neutron-operator-controller-manager-64ddbf8bb-66xx2\" (UID: \"f43b76e3-db2c-40f4-80fa-77ed9f196cf5\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.597708 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.609104 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.613109 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.625859 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.648992 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csst5\" (UniqueName: \"kubernetes.io/projected/b64a3a78-cbf6-44ce-a7f2-7955af1d6e04-kube-api-access-csst5\") pod \"swift-operator-controller-manager-68f46476f-nv9kv\" (UID: \"b64a3a78-cbf6-44ce-a7f2-7955af1d6e04\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.649071 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5pr5\" (UniqueName: \"kubernetes.io/projected/32fb9648-24e5-4073-902e-f76ea1eaa512-kube-api-access-l5pr5\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4\" (UID: \"32fb9648-24e5-4073-902e-f76ea1eaa512\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.649099 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmxff\" (UniqueName: \"kubernetes.io/projected/3205b9a4-589f-4200-9e47-a073f38397c1-kube-api-access-bmxff\") pod \"ovn-operator-controller-manager-d44cf6b75-fsr75\" (UID: \"3205b9a4-589f-4200-9e47-a073f38397c1\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.649159 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpmkv\" (UniqueName: \"kubernetes.io/projected/856ad725-988a-44f6-8cb1-57ff2498e192-kube-api-access-bpmkv\") pod \"octavia-operator-controller-manager-69f8888797-kt9zs\" (UID: \"856ad725-988a-44f6-8cb1-57ff2498e192\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs" Feb 16 21:12:44 crc 
kubenswrapper[4805]: I0216 21:12:44.649225 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4c5c\" (UniqueName: \"kubernetes.io/projected/549bee15-d4bb-43c2-af22-1bdbf4e66b78-kube-api-access-j4c5c\") pod \"telemetry-operator-controller-manager-7d4dd64c87-45rd7\" (UID: \"549bee15-d4bb-43c2-af22-1bdbf4e66b78\") " pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.649408 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lbr\" (UniqueName: \"kubernetes.io/projected/745993b9-7ebe-405b-9242-a561ed40c3a7-kube-api-access-w4lbr\") pod \"placement-operator-controller-manager-8497b45c89-s54qb\" (UID: \"745993b9-7ebe-405b-9242-a561ed40c3a7\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.649552 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4\" (UID: \"32fb9648-24e5-4073-902e-f76ea1eaa512\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:12:44 crc kubenswrapper[4805]: E0216 21:12:44.649849 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:12:44 crc kubenswrapper[4805]: E0216 21:12:44.656680 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert podName:32fb9648-24e5-4073-902e-f76ea1eaa512 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:45.156632872 +0000 UTC m=+982.975316167 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" (UID: "32fb9648-24e5-4073-902e-f76ea1eaa512") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.666176 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.677382 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-h547j"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.681406 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5pr5\" (UniqueName: \"kubernetes.io/projected/32fb9648-24e5-4073-902e-f76ea1eaa512-kube-api-access-l5pr5\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4\" (UID: \"32fb9648-24e5-4073-902e-f76ea1eaa512\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.681820 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpmkv\" (UniqueName: \"kubernetes.io/projected/856ad725-988a-44f6-8cb1-57ff2498e192-kube-api-access-bpmkv\") pod \"octavia-operator-controller-manager-69f8888797-kt9zs\" (UID: \"856ad725-988a-44f6-8cb1-57ff2498e192\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.681938 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.682482 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.686534 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-s7bss" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.687716 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lbr\" (UniqueName: \"kubernetes.io/projected/745993b9-7ebe-405b-9242-a561ed40c3a7-kube-api-access-w4lbr\") pod \"placement-operator-controller-manager-8497b45c89-s54qb\" (UID: \"745993b9-7ebe-405b-9242-a561ed40c3a7\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.692873 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmxff\" (UniqueName: \"kubernetes.io/projected/3205b9a4-589f-4200-9e47-a073f38397c1-kube-api-access-bmxff\") pod \"ovn-operator-controller-manager-d44cf6b75-fsr75\" (UID: \"3205b9a4-589f-4200-9e47-a073f38397c1\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.696267 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-h547j"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.715214 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.723843 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.724969 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.727578 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-xflqc" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.744776 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.751513 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68vx\" (UniqueName: \"kubernetes.io/projected/7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf-kube-api-access-x68vx\") pod \"test-operator-controller-manager-7866795846-h547j\" (UID: \"7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf\") " pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.751681 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.751733 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csst5\" (UniqueName: \"kubernetes.io/projected/b64a3a78-cbf6-44ce-a7f2-7955af1d6e04-kube-api-access-csst5\") pod \"swift-operator-controller-manager-68f46476f-nv9kv\" (UID: \"b64a3a78-cbf6-44ce-a7f2-7955af1d6e04\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.751794 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j4c5c\" (UniqueName: \"kubernetes.io/projected/549bee15-d4bb-43c2-af22-1bdbf4e66b78-kube-api-access-j4c5c\") pod \"telemetry-operator-controller-manager-7d4dd64c87-45rd7\" (UID: \"549bee15-d4bb-43c2-af22-1bdbf4e66b78\") " pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" Feb 16 21:12:44 crc kubenswrapper[4805]: E0216 21:12:44.757369 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:12:44 crc kubenswrapper[4805]: E0216 21:12:44.757417 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert podName:a745e178-a8a5-4f2b-b9bd-ad41a35f6140 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:45.757400668 +0000 UTC m=+983.576083963 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert") pod "infra-operator-controller-manager-79d975b745-s2t59" (UID: "a745e178-a8a5-4f2b-b9bd-ad41a35f6140") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.759339 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.769879 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4c5c\" (UniqueName: \"kubernetes.io/projected/549bee15-d4bb-43c2-af22-1bdbf4e66b78-kube-api-access-j4c5c\") pod \"telemetry-operator-controller-manager-7d4dd64c87-45rd7\" (UID: \"549bee15-d4bb-43c2-af22-1bdbf4e66b78\") " pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.778802 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csst5\" (UniqueName: \"kubernetes.io/projected/b64a3a78-cbf6-44ce-a7f2-7955af1d6e04-kube-api-access-csst5\") pod \"swift-operator-controller-manager-68f46476f-nv9kv\" (UID: \"b64a3a78-cbf6-44ce-a7f2-7955af1d6e04\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.790253 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.791867 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.795043 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.795209 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.797036 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-g79nq" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.802762 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.810605 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.811895 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.813017 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.814540 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7jhhl" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.828483 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg"] Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.856122 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.856468 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmzwt\" (UniqueName: \"kubernetes.io/projected/6cf33838-78c1-40de-9089-f68fbe14ea86-kube-api-access-mmzwt\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.856521 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.856568 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7khdg\" (UniqueName: \"kubernetes.io/projected/209db403-57f8-46b8-9ca3-0986c81dd9c0-kube-api-access-7khdg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-scwrg\" (UID: \"209db403-57f8-46b8-9ca3-0986c81dd9c0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.856618 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8lp\" (UniqueName: \"kubernetes.io/projected/55ee298b-d2cf-460f-b540-e748a09f81f0-kube-api-access-2d8lp\") pod \"watcher-operator-controller-manager-5db88f68c-bg92w\" (UID: \"55ee298b-d2cf-460f-b540-e748a09f81f0\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.856660 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x68vx\" (UniqueName: \"kubernetes.io/projected/7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf-kube-api-access-x68vx\") pod \"test-operator-controller-manager-7866795846-h547j\" (UID: \"7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf\") " pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.876909 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x68vx\" (UniqueName: \"kubernetes.io/projected/7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf-kube-api-access-x68vx\") pod \"test-operator-controller-manager-7866795846-h547j\" (UID: \"7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf\") " pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.904964 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg"] Feb 16 
21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.913451 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" Feb 16 21:12:44 crc kubenswrapper[4805]: W0216 21:12:44.917077 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bc499f8_3fb7_4e12_bb4c_1e903e0c4333.slice/crio-0bb2d9997be5115aef726e7e2ad26626143e1896400b912be97933cba6610bfe WatchSource:0}: Error finding container 0bb2d9997be5115aef726e7e2ad26626143e1896400b912be97933cba6610bfe: Status 404 returned error can't find the container with id 0bb2d9997be5115aef726e7e2ad26626143e1896400b912be97933cba6610bfe Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.964132 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmzwt\" (UniqueName: \"kubernetes.io/projected/6cf33838-78c1-40de-9089-f68fbe14ea86-kube-api-access-mmzwt\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.964232 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.964296 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7khdg\" (UniqueName: \"kubernetes.io/projected/209db403-57f8-46b8-9ca3-0986c81dd9c0-kube-api-access-7khdg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-scwrg\" (UID: 
\"209db403-57f8-46b8-9ca3-0986c81dd9c0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.964408 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8lp\" (UniqueName: \"kubernetes.io/projected/55ee298b-d2cf-460f-b540-e748a09f81f0-kube-api-access-2d8lp\") pod \"watcher-operator-controller-manager-5db88f68c-bg92w\" (UID: \"55ee298b-d2cf-460f-b540-e748a09f81f0\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.964471 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:44 crc kubenswrapper[4805]: E0216 21:12:44.964679 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:12:44 crc kubenswrapper[4805]: E0216 21:12:44.964771 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs podName:6cf33838-78c1-40de-9089-f68fbe14ea86 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:45.464752967 +0000 UTC m=+983.283436262 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs") pod "openstack-operator-controller-manager-86b9cf86d-ntqc8" (UID: "6cf33838-78c1-40de-9089-f68fbe14ea86") : secret "webhook-server-cert" not found Feb 16 21:12:44 crc kubenswrapper[4805]: E0216 21:12:44.964820 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:12:44 crc kubenswrapper[4805]: E0216 21:12:44.964878 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs podName:6cf33838-78c1-40de-9089-f68fbe14ea86 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:45.46485525 +0000 UTC m=+983.283538545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs") pod "openstack-operator-controller-manager-86b9cf86d-ntqc8" (UID: "6cf33838-78c1-40de-9089-f68fbe14ea86") : secret "metrics-server-cert" not found Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.993316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmzwt\" (UniqueName: \"kubernetes.io/projected/6cf33838-78c1-40de-9089-f68fbe14ea86-kube-api-access-mmzwt\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:44 crc kubenswrapper[4805]: I0216 21:12:44.995038 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7khdg\" (UniqueName: \"kubernetes.io/projected/209db403-57f8-46b8-9ca3-0986c81dd9c0-kube-api-access-7khdg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-scwrg\" (UID: \"209db403-57f8-46b8-9ca3-0986c81dd9c0\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.000566 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6"] Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.005194 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8lp\" (UniqueName: \"kubernetes.io/projected/55ee298b-d2cf-460f-b540-e748a09f81f0-kube-api-access-2d8lp\") pod \"watcher-operator-controller-manager-5db88f68c-bg92w\" (UID: \"55ee298b-d2cf-460f-b540-e748a09f81f0\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.006124 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.030970 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.054626 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.134134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6" event={"ID":"e48823c7-c98e-447b-b539-1ce95bd2d3ba","Type":"ContainerStarted","Data":"503e0de2f46df7d2b32c6e0fd0ea4cd4b1f23a7011828d51413540fd199a35ed"} Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.135597 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg" event={"ID":"5bc499f8-3fb7-4e12-bb4c-1e903e0c4333","Type":"ContainerStarted","Data":"0bb2d9997be5115aef726e7e2ad26626143e1896400b912be97933cba6610bfe"} Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.142143 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.174645 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4\" (UID: \"32fb9648-24e5-4073-902e-f76ea1eaa512\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:12:45 crc kubenswrapper[4805]: E0216 21:12:45.174937 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:12:45 crc kubenswrapper[4805]: E0216 21:12:45.174984 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert podName:32fb9648-24e5-4073-902e-f76ea1eaa512 nodeName:}" failed. 
No retries permitted until 2026-02-16 21:12:46.174969084 +0000 UTC m=+983.993652379 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" (UID: "32fb9648-24e5-4073-902e-f76ea1eaa512") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.412654 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs"] Feb 16 21:12:45 crc kubenswrapper[4805]: W0216 21:12:45.421835 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b0862c2_4070_4639_94cc_c29e08f49bf1.slice/crio-d83b7f1a2eb14e60afac6c7523fdf1dfcac1aa33257f74a9832997c434fd45b4 WatchSource:0}: Error finding container d83b7f1a2eb14e60afac6c7523fdf1dfcac1aa33257f74a9832997c434fd45b4: Status 404 returned error can't find the container with id d83b7f1a2eb14e60afac6c7523fdf1dfcac1aa33257f74a9832997c434fd45b4 Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.468296 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-77f85"] Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.478801 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.478924 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:45 crc kubenswrapper[4805]: E0216 21:12:45.479102 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:12:45 crc kubenswrapper[4805]: E0216 21:12:45.479151 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs podName:6cf33838-78c1-40de-9089-f68fbe14ea86 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:46.479137192 +0000 UTC m=+984.297820487 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs") pod "openstack-operator-controller-manager-86b9cf86d-ntqc8" (UID: "6cf33838-78c1-40de-9089-f68fbe14ea86") : secret "webhook-server-cert" not found Feb 16 21:12:45 crc kubenswrapper[4805]: E0216 21:12:45.479223 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:12:45 crc kubenswrapper[4805]: E0216 21:12:45.479288 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs podName:6cf33838-78c1-40de-9089-f68fbe14ea86 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:46.479266955 +0000 UTC m=+984.297950250 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs") pod "openstack-operator-controller-manager-86b9cf86d-ntqc8" (UID: "6cf33838-78c1-40de-9089-f68fbe14ea86") : secret "metrics-server-cert" not found Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.530791 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-djjn2"] Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.657393 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth"] Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.783959 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" Feb 16 21:12:45 crc kubenswrapper[4805]: E0216 21:12:45.784243 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:12:45 crc kubenswrapper[4805]: E0216 21:12:45.784295 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert podName:a745e178-a8a5-4f2b-b9bd-ad41a35f6140 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:47.784280827 +0000 UTC m=+985.602964122 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert") pod "infra-operator-controller-manager-79d975b745-s2t59" (UID: "a745e178-a8a5-4f2b-b9bd-ad41a35f6140") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.798666 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2"] Feb 16 21:12:45 crc kubenswrapper[4805]: I0216 21:12:45.818181 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc"] Feb 16 21:12:45 crc kubenswrapper[4805]: W0216 21:12:45.820727 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod431105e4_6dfd_4644_ae7a_521284b98eda.slice/crio-f36d35dd4d076fa26880cb9d9048f5f6cccbd2067b15a77148e1eb2b6dfea047 WatchSource:0}: Error finding container f36d35dd4d076fa26880cb9d9048f5f6cccbd2067b15a77148e1eb2b6dfea047: Status 404 returned error can't find the container with id f36d35dd4d076fa26880cb9d9048f5f6cccbd2067b15a77148e1eb2b6dfea047 Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.145634 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-77f85" event={"ID":"f2b71132-ee94-4b2a-ad19-ab9dde9013ef","Type":"ContainerStarted","Data":"ad93f2e1b4f03512ea47ec6d89224de09f5f40adae09624a6366a09dcb7b1cd4"} Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.146975 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs" event={"ID":"5b0862c2-4070-4639-94cc-c29e08f49bf1","Type":"ContainerStarted","Data":"d83b7f1a2eb14e60afac6c7523fdf1dfcac1aa33257f74a9832997c434fd45b4"} Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.148274 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc" event={"ID":"431105e4-6dfd-4644-ae7a-521284b98eda","Type":"ContainerStarted","Data":"f36d35dd4d076fa26880cb9d9048f5f6cccbd2067b15a77148e1eb2b6dfea047"} Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.150076 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth" event={"ID":"856ddca1-f396-432a-b33a-9fa0c1611e29","Type":"ContainerStarted","Data":"bdb6255f64c4dfec3a17dd4e74a731720a5b79e4b2d5046ad349e721e578eecb"} Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.154539 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2" event={"ID":"e38af58d-0049-4d9c-a834-ca048c0b171f","Type":"ContainerStarted","Data":"5a053d52169f03ad230215fc5ccde64234f51a513628cc47bd6e59b35bfe442d"} Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.157555 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2" event={"ID":"21218328-0794-4bb6-aa02-2bb8fa48f6b9","Type":"ContainerStarted","Data":"97dd53e2e36b69107bfe4e5c46c396914a3b8b58236f6d146e3b825ad9d9523b"} Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.195955 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4\" (UID: \"32fb9648-24e5-4073-902e-f76ea1eaa512\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:12:46 crc kubenswrapper[4805]: E0216 21:12:46.196177 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:12:46 crc kubenswrapper[4805]: E0216 21:12:46.196221 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert podName:32fb9648-24e5-4073-902e-f76ea1eaa512 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:48.19620765 +0000 UTC m=+986.014890945 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" (UID: "32fb9648-24e5-4073-902e-f76ea1eaa512") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.499698 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.499847 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:46 crc kubenswrapper[4805]: E0216 21:12:46.499866 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:12:46 crc kubenswrapper[4805]: E0216 21:12:46.499939 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs podName:6cf33838-78c1-40de-9089-f68fbe14ea86 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:48.499921436 +0000 UTC m=+986.318604731 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs") pod "openstack-operator-controller-manager-86b9cf86d-ntqc8" (UID: "6cf33838-78c1-40de-9089-f68fbe14ea86") : secret "metrics-server-cert" not found Feb 16 21:12:46 crc kubenswrapper[4805]: E0216 21:12:46.499997 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:12:46 crc kubenswrapper[4805]: E0216 21:12:46.500044 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs podName:6cf33838-78c1-40de-9089-f68fbe14ea86 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:48.500031559 +0000 UTC m=+986.318714844 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs") pod "openstack-operator-controller-manager-86b9cf86d-ntqc8" (UID: "6cf33838-78c1-40de-9089-f68fbe14ea86") : secret "webhook-server-cert" not found Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.521666 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2"] Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.575851 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb"] Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.598026 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75"] Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.612154 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn"] Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.664679 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv"] Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.681799 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj"] Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.687783 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7"] Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.696900 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w"] Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.700796 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb"] Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.729390 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs"] Feb 16 21:12:46 crc kubenswrapper[4805]: W0216 21:12:46.736802 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c325ae7_03f7_4d40_a4b2_9fd7a10b98bf.slice/crio-37c078849569cb99d5fc5424a276191287d1a2546c69c1cbe7e6d52ccfe173da WatchSource:0}: Error finding container 37c078849569cb99d5fc5424a276191287d1a2546c69c1cbe7e6d52ccfe173da: Status 404 returned error can't find the container with id 37c078849569cb99d5fc5424a276191287d1a2546c69c1cbe7e6d52ccfe173da Feb 16 21:12:46 crc kubenswrapper[4805]: E0216 21:12:46.744454 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x68vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-h547j_openstack-operators(7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 21:12:46 crc kubenswrapper[4805]: E0216 21:12:46.745530 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" podUID="7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf" Feb 16 21:12:46 crc 
kubenswrapper[4805]: E0216 21:12:46.753336 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7khdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-scwrg_openstack-operators(209db403-57f8-46b8-9ca3-0986c81dd9c0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.753396 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-h547j"] Feb 16 21:12:46 crc kubenswrapper[4805]: E0216 21:12:46.754697 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" podUID="209db403-57f8-46b8-9ca3-0986c81dd9c0" Feb 16 21:12:46 crc kubenswrapper[4805]: I0216 21:12:46.769446 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg"] Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.210059 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb" event={"ID":"745993b9-7ebe-405b-9242-a561ed40c3a7","Type":"ContainerStarted","Data":"ddbafd498fcfe7b6b72ff534540dd988d60b1ba7ebe0df0637e0114705c9b493"} Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.219340 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" event={"ID":"7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf","Type":"ContainerStarted","Data":"37c078849569cb99d5fc5424a276191287d1a2546c69c1cbe7e6d52ccfe173da"} Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.220903 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" event={"ID":"b64a3a78-cbf6-44ce-a7f2-7955af1d6e04","Type":"ContainerStarted","Data":"429953b583279104975a49ba27cfa942a1bab8f983b1f44d271482dcd0bf66a6"} Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.226668 
4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs" event={"ID":"856ad725-988a-44f6-8cb1-57ff2498e192","Type":"ContainerStarted","Data":"82d7aa5a33876678c3775ce533606a7ac74fc09128c0a162e95ea4071fc2bb60"} Feb 16 21:12:47 crc kubenswrapper[4805]: E0216 21:12:47.229155 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" podUID="7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf" Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.234959 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" event={"ID":"549bee15-d4bb-43c2-af22-1bdbf4e66b78","Type":"ContainerStarted","Data":"a840d7bd59d93d7f4d0333ddde2cd7d43f81f1369dc8120d71bf0607bd10de8c"} Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.239141 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj" event={"ID":"6bb4da12-019d-4101-a5eb-e0c85421d029","Type":"ContainerStarted","Data":"62821908cc1ef52d5f10a8560f6c851c6f9fcc55db7d5ba51940ebc365d173c8"} Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.240588 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2" event={"ID":"f43b76e3-db2c-40f4-80fa-77ed9f196cf5","Type":"ContainerStarted","Data":"2ad7ac77ded278667992e10b046da8ecd86fab849b6b6db52d5f1dc56d7a3fb9"} Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.247135 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" 
event={"ID":"209db403-57f8-46b8-9ca3-0986c81dd9c0","Type":"ContainerStarted","Data":"83d60bc0f1184ed465026a304778a1f0cd881914892e79782c4208b244ce6ab4"} Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.249652 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" event={"ID":"3205b9a4-589f-4200-9e47-a073f38397c1","Type":"ContainerStarted","Data":"665d326e85e76f030d89803815fba08fbd277d59a6ddd24212d80881e61a0cff"} Feb 16 21:12:47 crc kubenswrapper[4805]: E0216 21:12:47.249996 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" podUID="209db403-57f8-46b8-9ca3-0986c81dd9c0" Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.251188 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb" event={"ID":"2840ffe3-d3c1-4faf-bb32-f9c17173713f","Type":"ContainerStarted","Data":"d58c69519e353cd39734e9aafff53db61b750df5a662d6df4b0ce73934c49952"} Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.258052 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" event={"ID":"55ee298b-d2cf-460f-b540-e748a09f81f0","Type":"ContainerStarted","Data":"0df05acce6befd3921df271e5f308e306d89f771b026cf3da36a0125d93d5741"} Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.265949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn" 
event={"ID":"8131c3df-9b2e-48f7-95c9-95a8d5ba9f69","Type":"ContainerStarted","Data":"986708e3cb9707c700349e98cd9cda0fdeb2af2ec13c309bcf39604645a8b667"} Feb 16 21:12:47 crc kubenswrapper[4805]: I0216 21:12:47.845570 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" Feb 16 21:12:47 crc kubenswrapper[4805]: E0216 21:12:47.845967 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:12:47 crc kubenswrapper[4805]: E0216 21:12:47.846024 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert podName:a745e178-a8a5-4f2b-b9bd-ad41a35f6140 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:51.84600663 +0000 UTC m=+989.664689925 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert") pod "infra-operator-controller-manager-79d975b745-s2t59" (UID: "a745e178-a8a5-4f2b-b9bd-ad41a35f6140") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:12:48 crc kubenswrapper[4805]: I0216 21:12:48.254740 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4\" (UID: \"32fb9648-24e5-4073-902e-f76ea1eaa512\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:12:48 crc kubenswrapper[4805]: E0216 21:12:48.254945 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:12:48 crc kubenswrapper[4805]: E0216 21:12:48.255033 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert podName:32fb9648-24e5-4073-902e-f76ea1eaa512 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:52.255007395 +0000 UTC m=+990.073690680 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" (UID: "32fb9648-24e5-4073-902e-f76ea1eaa512") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:12:48 crc kubenswrapper[4805]: E0216 21:12:48.277505 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" podUID="7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf" Feb 16 21:12:48 crc kubenswrapper[4805]: E0216 21:12:48.284773 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" podUID="209db403-57f8-46b8-9ca3-0986c81dd9c0" Feb 16 21:12:48 crc kubenswrapper[4805]: I0216 21:12:48.561187 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:48 crc kubenswrapper[4805]: E0216 21:12:48.561318 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:12:48 crc kubenswrapper[4805]: E0216 21:12:48.561435 4805 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs podName:6cf33838-78c1-40de-9089-f68fbe14ea86 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:52.561417144 +0000 UTC m=+990.380100439 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs") pod "openstack-operator-controller-manager-86b9cf86d-ntqc8" (UID: "6cf33838-78c1-40de-9089-f68fbe14ea86") : secret "metrics-server-cert" not found Feb 16 21:12:48 crc kubenswrapper[4805]: I0216 21:12:48.561466 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:48 crc kubenswrapper[4805]: E0216 21:12:48.561855 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:12:48 crc kubenswrapper[4805]: E0216 21:12:48.561916 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs podName:6cf33838-78c1-40de-9089-f68fbe14ea86 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:52.561901517 +0000 UTC m=+990.380584812 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs") pod "openstack-operator-controller-manager-86b9cf86d-ntqc8" (UID: "6cf33838-78c1-40de-9089-f68fbe14ea86") : secret "webhook-server-cert" not found Feb 16 21:12:51 crc kubenswrapper[4805]: I0216 21:12:51.927909 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" Feb 16 21:12:51 crc kubenswrapper[4805]: E0216 21:12:51.928153 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:12:51 crc kubenswrapper[4805]: E0216 21:12:51.929498 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert podName:a745e178-a8a5-4f2b-b9bd-ad41a35f6140 nodeName:}" failed. No retries permitted until 2026-02-16 21:12:59.929478487 +0000 UTC m=+997.748161792 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert") pod "infra-operator-controller-manager-79d975b745-s2t59" (UID: "a745e178-a8a5-4f2b-b9bd-ad41a35f6140") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:12:52 crc kubenswrapper[4805]: I0216 21:12:52.335837 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4\" (UID: \"32fb9648-24e5-4073-902e-f76ea1eaa512\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:12:52 crc kubenswrapper[4805]: E0216 21:12:52.336813 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:12:52 crc kubenswrapper[4805]: E0216 21:12:52.336968 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert podName:32fb9648-24e5-4073-902e-f76ea1eaa512 nodeName:}" failed. No retries permitted until 2026-02-16 21:13:00.33694949 +0000 UTC m=+998.155632795 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" (UID: "32fb9648-24e5-4073-902e-f76ea1eaa512") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:12:52 crc kubenswrapper[4805]: I0216 21:12:52.641380 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:52 crc kubenswrapper[4805]: I0216 21:12:52.641576 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:12:52 crc kubenswrapper[4805]: E0216 21:12:52.641591 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:12:52 crc kubenswrapper[4805]: E0216 21:12:52.641679 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs podName:6cf33838-78c1-40de-9089-f68fbe14ea86 nodeName:}" failed. No retries permitted until 2026-02-16 21:13:00.641655524 +0000 UTC m=+998.460338889 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs") pod "openstack-operator-controller-manager-86b9cf86d-ntqc8" (UID: "6cf33838-78c1-40de-9089-f68fbe14ea86") : secret "webhook-server-cert" not found Feb 16 21:12:52 crc kubenswrapper[4805]: E0216 21:12:52.641916 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:12:52 crc kubenswrapper[4805]: E0216 21:12:52.642035 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs podName:6cf33838-78c1-40de-9089-f68fbe14ea86 nodeName:}" failed. No retries permitted until 2026-02-16 21:13:00.642007543 +0000 UTC m=+998.460690878 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs") pod "openstack-operator-controller-manager-86b9cf86d-ntqc8" (UID: "6cf33838-78c1-40de-9089-f68fbe14ea86") : secret "metrics-server-cert" not found Feb 16 21:12:59 crc kubenswrapper[4805]: E0216 21:12:59.038614 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 16 21:12:59 crc kubenswrapper[4805]: E0216 21:12:59.039325 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7s2hn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-djjn2_openstack-operators(21218328-0794-4bb6-aa02-2bb8fa48f6b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:12:59 crc kubenswrapper[4805]: E0216 21:12:59.040877 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2" podUID="21218328-0794-4bb6-aa02-2bb8fa48f6b9" Feb 16 21:12:59 crc kubenswrapper[4805]: E0216 21:12:59.381374 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2" podUID="21218328-0794-4bb6-aa02-2bb8fa48f6b9" Feb 16 21:13:00 crc kubenswrapper[4805]: I0216 21:13:00.340180 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4\" (UID: \"32fb9648-24e5-4073-902e-f76ea1eaa512\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:13:00 crc kubenswrapper[4805]: I0216 21:13:00.340263 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" Feb 16 21:13:00 crc kubenswrapper[4805]: E0216 21:13:00.340572 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:13:00 crc kubenswrapper[4805]: E0216 21:13:00.340638 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert podName:a745e178-a8a5-4f2b-b9bd-ad41a35f6140 nodeName:}" failed. No retries permitted until 2026-02-16 21:13:16.340620326 +0000 UTC m=+1014.159303621 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert") pod "infra-operator-controller-manager-79d975b745-s2t59" (UID: "a745e178-a8a5-4f2b-b9bd-ad41a35f6140") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:13:00 crc kubenswrapper[4805]: I0216 21:13:00.346274 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32fb9648-24e5-4073-902e-f76ea1eaa512-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4\" (UID: \"32fb9648-24e5-4073-902e-f76ea1eaa512\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:13:00 crc kubenswrapper[4805]: I0216 21:13:00.384451 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:13:00 crc kubenswrapper[4805]: I0216 21:13:00.644460 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:13:00 crc kubenswrapper[4805]: I0216 21:13:00.644883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:13:00 crc kubenswrapper[4805]: I0216 21:13:00.649869 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-webhook-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:13:00 crc kubenswrapper[4805]: I0216 21:13:00.649898 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cf33838-78c1-40de-9089-f68fbe14ea86-metrics-certs\") pod \"openstack-operator-controller-manager-86b9cf86d-ntqc8\" (UID: \"6cf33838-78c1-40de-9089-f68fbe14ea86\") " pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:13:00 crc kubenswrapper[4805]: I0216 21:13:00.710142 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:13:03 crc kubenswrapper[4805]: E0216 21:13:03.439324 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 16 21:13:03 crc kubenswrapper[4805]: E0216 21:13:03.439949 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cbsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-mvwth_openstack-operators(856ddca1-f396-432a-b33a-9fa0c1611e29): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:03 crc kubenswrapper[4805]: E0216 21:13:03.441245 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth" podUID="856ddca1-f396-432a-b33a-9fa0c1611e29" Feb 16 21:13:04 crc kubenswrapper[4805]: E0216 21:13:04.145033 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 16 21:13:04 crc kubenswrapper[4805]: E0216 21:13:04.145812 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjjvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-dm6f6_openstack-operators(e48823c7-c98e-447b-b539-1ce95bd2d3ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:04 crc kubenswrapper[4805]: E0216 21:13:04.147211 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6" podUID="e48823c7-c98e-447b-b539-1ce95bd2d3ba" Feb 16 21:13:04 crc kubenswrapper[4805]: E0216 21:13:04.425758 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth" podUID="856ddca1-f396-432a-b33a-9fa0c1611e29" Feb 16 21:13:04 crc kubenswrapper[4805]: E0216 21:13:04.426477 4805 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6" podUID="e48823c7-c98e-447b-b539-1ce95bd2d3ba" Feb 16 21:13:05 crc kubenswrapper[4805]: E0216 21:13:05.740699 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 16 21:13:05 crc kubenswrapper[4805]: E0216 21:13:05.741191 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zm47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-4xgqc_openstack-operators(431105e4-6dfd-4644-ae7a-521284b98eda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:05 crc kubenswrapper[4805]: E0216 21:13:05.742466 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc" podUID="431105e4-6dfd-4644-ae7a-521284b98eda" Feb 16 21:13:06 crc kubenswrapper[4805]: E0216 21:13:06.263413 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 16 21:13:06 crc kubenswrapper[4805]: E0216 21:13:06.263642 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xn4t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-rwjb2_openstack-operators(e38af58d-0049-4d9c-a834-ca048c0b171f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:06 crc kubenswrapper[4805]: E0216 21:13:06.264908 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2" podUID="e38af58d-0049-4d9c-a834-ca048c0b171f" Feb 16 21:13:06 crc kubenswrapper[4805]: E0216 21:13:06.446264 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc" podUID="431105e4-6dfd-4644-ae7a-521284b98eda" Feb 16 21:13:06 crc kubenswrapper[4805]: E0216 21:13:06.446448 4805 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2" podUID="e38af58d-0049-4d9c-a834-ca048c0b171f" Feb 16 21:13:08 crc kubenswrapper[4805]: E0216 21:13:08.852702 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 21:13:08 crc kubenswrapper[4805]: E0216 21:13:08.855133 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l6gn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-lzgrn_openstack-operators(8131c3df-9b2e-48f7-95c9-95a8d5ba9f69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:08 crc kubenswrapper[4805]: E0216 21:13:08.856392 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn" podUID="8131c3df-9b2e-48f7-95c9-95a8d5ba9f69" Feb 16 21:13:09 crc kubenswrapper[4805]: E0216 21:13:09.477978 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn" podUID="8131c3df-9b2e-48f7-95c9-95a8d5ba9f69" Feb 16 21:13:09 crc kubenswrapper[4805]: E0216 21:13:09.618395 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 16 21:13:09 crc kubenswrapper[4805]: E0216 21:13:09.618638 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bmxff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-fsr75_openstack-operators(3205b9a4-589f-4200-9e47-a073f38397c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:09 crc kubenswrapper[4805]: E0216 21:13:09.620151 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" podUID="3205b9a4-589f-4200-9e47-a073f38397c1" Feb 16 21:13:10 crc kubenswrapper[4805]: E0216 21:13:10.116754 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 16 21:13:10 crc kubenswrapper[4805]: E0216 21:13:10.116991 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-csst5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-nv9kv_openstack-operators(b64a3a78-cbf6-44ce-a7f2-7955af1d6e04): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:10 crc kubenswrapper[4805]: E0216 21:13:10.118750 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" podUID="b64a3a78-cbf6-44ce-a7f2-7955af1d6e04" Feb 16 21:13:10 crc kubenswrapper[4805]: E0216 21:13:10.487473 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" podUID="b64a3a78-cbf6-44ce-a7f2-7955af1d6e04" Feb 16 21:13:10 crc kubenswrapper[4805]: E0216 21:13:10.488481 4805 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" podUID="3205b9a4-589f-4200-9e47-a073f38397c1" Feb 16 21:13:11 crc kubenswrapper[4805]: E0216 21:13:11.557238 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 16 21:13:11 crc kubenswrapper[4805]: E0216 21:13:11.557511 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zw6hj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-66xx2_openstack-operators(f43b76e3-db2c-40f4-80fa-77ed9f196cf5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:11 crc kubenswrapper[4805]: E0216 21:13:11.558662 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2" podUID="f43b76e3-db2c-40f4-80fa-77ed9f196cf5" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.106989 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.107408 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dp64b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-2mzjb_openstack-operators(2840ffe3-d3c1-4faf-bb32-f9c17173713f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.108662 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb" podUID="2840ffe3-d3c1-4faf-bb32-f9c17173713f" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.513365 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb" podUID="2840ffe3-d3c1-4faf-bb32-f9c17173713f" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.513590 4805 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2" podUID="f43b76e3-db2c-40f4-80fa-77ed9f196cf5" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.716907 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.717122 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2d8lp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-bg92w_openstack-operators(55ee298b-d2cf-460f-b540-e748a09f81f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.718292 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" podUID="55ee298b-d2cf-460f-b540-e748a09f81f0" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.786706 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.13:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.786791 4805 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.13:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.786968 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.13:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4c5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7d4dd64c87-45rd7_openstack-operators(549bee15-d4bb-43c2-af22-1bdbf4e66b78): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:12 crc kubenswrapper[4805]: E0216 21:13:12.788132 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" podUID="549bee15-d4bb-43c2-af22-1bdbf4e66b78" Feb 16 21:13:13 crc kubenswrapper[4805]: E0216 21:13:13.516355 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.13:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" podUID="549bee15-d4bb-43c2-af22-1bdbf4e66b78" Feb 16 21:13:13 crc kubenswrapper[4805]: E0216 21:13:13.516400 4805 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" podUID="55ee298b-d2cf-460f-b540-e748a09f81f0" Feb 16 21:13:16 crc kubenswrapper[4805]: I0216 21:13:16.395511 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" Feb 16 21:13:16 crc kubenswrapper[4805]: I0216 21:13:16.404261 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a745e178-a8a5-4f2b-b9bd-ad41a35f6140-cert\") pod \"infra-operator-controller-manager-79d975b745-s2t59\" (UID: \"a745e178-a8a5-4f2b-b9bd-ad41a35f6140\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" Feb 16 21:13:16 crc kubenswrapper[4805]: I0216 21:13:16.470574 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" Feb 16 21:13:17 crc kubenswrapper[4805]: E0216 21:13:17.747639 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 21:13:17 crc kubenswrapper[4805]: E0216 21:13:17.748154 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cbjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-dghtj_openstack-operators(6bb4da12-019d-4101-a5eb-e0c85421d029): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:17 crc kubenswrapper[4805]: E0216 21:13:17.749379 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj" podUID="6bb4da12-019d-4101-a5eb-e0c85421d029" Feb 16 21:13:18 crc kubenswrapper[4805]: E0216 21:13:18.237825 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 16 21:13:18 crc kubenswrapper[4805]: E0216 21:13:18.238060 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7khdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-scwrg_openstack-operators(209db403-57f8-46b8-9ca3-0986c81dd9c0): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:13:18 crc kubenswrapper[4805]: E0216 21:13:18.239216 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" podUID="209db403-57f8-46b8-9ca3-0986c81dd9c0" Feb 16 21:13:18 crc kubenswrapper[4805]: I0216 21:13:18.565603 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs" event={"ID":"856ad725-988a-44f6-8cb1-57ff2498e192","Type":"ContainerStarted","Data":"579225c54169d81f2f03b57702ef0d3ea55819428539c5870850176cbc1d008d"} Feb 16 21:13:18 crc kubenswrapper[4805]: I0216 21:13:18.566161 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs" Feb 16 21:13:18 crc kubenswrapper[4805]: E0216 21:13:18.571224 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj" podUID="6bb4da12-019d-4101-a5eb-e0c85421d029" Feb 16 21:13:18 crc kubenswrapper[4805]: I0216 21:13:18.595180 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs" podStartSLOduration=7.741273026 podStartE2EDuration="34.595165091s" podCreationTimestamp="2026-02-16 21:12:44 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.729013344 +0000 UTC m=+984.547696639" lastFinishedPulling="2026-02-16 21:13:13.582905409 +0000 
UTC m=+1011.401588704" observedRunningTime="2026-02-16 21:13:18.591941294 +0000 UTC m=+1016.410624599" watchObservedRunningTime="2026-02-16 21:13:18.595165091 +0000 UTC m=+1016.413848386" Feb 16 21:13:18 crc kubenswrapper[4805]: I0216 21:13:18.829566 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4"] Feb 16 21:13:19 crc kubenswrapper[4805]: W0216 21:13:19.098983 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda745e178_a8a5_4f2b_b9bd_ad41a35f6140.slice/crio-31bd8f03a03051520fecfab48a9baaf210358698e8ebb7c470b95fc45bc3d666 WatchSource:0}: Error finding container 31bd8f03a03051520fecfab48a9baaf210358698e8ebb7c470b95fc45bc3d666: Status 404 returned error can't find the container with id 31bd8f03a03051520fecfab48a9baaf210358698e8ebb7c470b95fc45bc3d666 Feb 16 21:13:19 crc kubenswrapper[4805]: W0216 21:13:19.102528 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cf33838_78c1_40de_9089_f68fbe14ea86.slice/crio-8fa8ffd445cdc137346fa31ad44853ebd0d4f56d67a861fbd656dbc8eb7658b5 WatchSource:0}: Error finding container 8fa8ffd445cdc137346fa31ad44853ebd0d4f56d67a861fbd656dbc8eb7658b5: Status 404 returned error can't find the container with id 8fa8ffd445cdc137346fa31ad44853ebd0d4f56d67a861fbd656dbc8eb7658b5 Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.115625 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8"] Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.154702 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-s2t59"] Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.574792 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6" event={"ID":"e48823c7-c98e-447b-b539-1ce95bd2d3ba","Type":"ContainerStarted","Data":"c5c9403fa47a9116003a985cb91db8a9ad5398bcd9a9d35e2e0aafebc382905d"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.575190 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.576008 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" event={"ID":"a745e178-a8a5-4f2b-b9bd-ad41a35f6140","Type":"ContainerStarted","Data":"31bd8f03a03051520fecfab48a9baaf210358698e8ebb7c470b95fc45bc3d666"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.577610 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2" event={"ID":"21218328-0794-4bb6-aa02-2bb8fa48f6b9","Type":"ContainerStarted","Data":"7402cd0fd5a09c202928c98a5a911bffbdb9ff7fa27d789900ad13d11c88eeec"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.577838 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.579036 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" event={"ID":"7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf","Type":"ContainerStarted","Data":"5a310bdc5bd8644b47f6b4ba16f148620f680a503f90cb010b6194549c148275"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.579206 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.580309 4805 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" event={"ID":"32fb9648-24e5-4073-902e-f76ea1eaa512","Type":"ContainerStarted","Data":"b5d557732c1311afa37ce5c8f7a3cb186688b6f974d622f6c8ce16b14da50d59"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.581357 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" event={"ID":"6cf33838-78c1-40de-9089-f68fbe14ea86","Type":"ContainerStarted","Data":"28f77ad27fa5050d15e4e057f75787aabea854a7a695076fba174df8447325ea"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.581380 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" event={"ID":"6cf33838-78c1-40de-9089-f68fbe14ea86","Type":"ContainerStarted","Data":"8fa8ffd445cdc137346fa31ad44853ebd0d4f56d67a861fbd656dbc8eb7658b5"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.582131 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.583308 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2" event={"ID":"e38af58d-0049-4d9c-a834-ca048c0b171f","Type":"ContainerStarted","Data":"3de33be8e1f1f88fba1c19a10bdeb4f922afce37009af800e41ae9cd3d114e81"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.583695 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.588927 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg" 
event={"ID":"5bc499f8-3fb7-4e12-bb4c-1e903e0c4333","Type":"ContainerStarted","Data":"7585b464833a0307a965f025e51eff4ee0042f4a36ea0e5d6ebdc35b28b0921f"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.589528 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.590460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-77f85" event={"ID":"f2b71132-ee94-4b2a-ad19-ab9dde9013ef","Type":"ContainerStarted","Data":"afc27b318364d85163663991f4f780f329f0c88da073c4d0a76c6dc9928a9b5c"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.590862 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-77f85" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.593964 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb" event={"ID":"745993b9-7ebe-405b-9242-a561ed40c3a7","Type":"ContainerStarted","Data":"ecb8f859257368112af504e954cad36a68906939be9bcbcadb5b9b33f61ce34b"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.594575 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.599360 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6" podStartSLOduration=2.591003447 podStartE2EDuration="36.599345306s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:12:45.060952119 +0000 UTC m=+982.879635414" lastFinishedPulling="2026-02-16 21:13:19.069293978 +0000 UTC m=+1016.887977273" 
observedRunningTime="2026-02-16 21:13:19.592786409 +0000 UTC m=+1017.411469704" watchObservedRunningTime="2026-02-16 21:13:19.599345306 +0000 UTC m=+1017.418028601" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.617810 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs" event={"ID":"5b0862c2-4070-4639-94cc-c29e08f49bf1","Type":"ContainerStarted","Data":"94c53b9f760061eace417dce93d1aae8c62f708d04d16a760d126c51ead42aa9"} Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.618229 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.627645 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2" podStartSLOduration=3.900849841 podStartE2EDuration="36.627629891s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:12:45.52417919 +0000 UTC m=+983.342862485" lastFinishedPulling="2026-02-16 21:13:18.25095923 +0000 UTC m=+1016.069642535" observedRunningTime="2026-02-16 21:13:19.621713862 +0000 UTC m=+1017.440397157" watchObservedRunningTime="2026-02-16 21:13:19.627629891 +0000 UTC m=+1017.446313186" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.661956 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg" podStartSLOduration=8.902330243 podStartE2EDuration="36.6619432s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:12:44.926843591 +0000 UTC m=+982.745526886" lastFinishedPulling="2026-02-16 21:13:12.686456548 +0000 UTC m=+1010.505139843" observedRunningTime="2026-02-16 21:13:19.660779669 +0000 UTC m=+1017.479462964" watchObservedRunningTime="2026-02-16 
21:13:19.6619432 +0000 UTC m=+1017.480626495" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.702170 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" podStartSLOduration=35.702149997 podStartE2EDuration="35.702149997s" podCreationTimestamp="2026-02-16 21:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:13:19.698121489 +0000 UTC m=+1017.516804784" watchObservedRunningTime="2026-02-16 21:13:19.702149997 +0000 UTC m=+1017.520833292" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.747225 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" podStartSLOduration=4.242353034 podStartE2EDuration="35.747204417s" podCreationTimestamp="2026-02-16 21:12:44 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.744321658 +0000 UTC m=+984.563004953" lastFinishedPulling="2026-02-16 21:13:18.249173041 +0000 UTC m=+1016.067856336" observedRunningTime="2026-02-16 21:13:19.722876608 +0000 UTC m=+1017.541559903" watchObservedRunningTime="2026-02-16 21:13:19.747204417 +0000 UTC m=+1017.565887712" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.754491 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2" podStartSLOduration=3.186135438 podStartE2EDuration="36.754473983s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:12:45.799968211 +0000 UTC m=+983.618651506" lastFinishedPulling="2026-02-16 21:13:19.368306756 +0000 UTC m=+1017.186990051" observedRunningTime="2026-02-16 21:13:19.754004791 +0000 UTC m=+1017.572688086" watchObservedRunningTime="2026-02-16 21:13:19.754473983 +0000 UTC m=+1017.573157278" Feb 16 21:13:19 crc 
kubenswrapper[4805]: I0216 21:13:19.789882 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-77f85" podStartSLOduration=8.695189009 podStartE2EDuration="36.78986316s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:12:45.488232698 +0000 UTC m=+983.306915993" lastFinishedPulling="2026-02-16 21:13:13.582906849 +0000 UTC m=+1011.401590144" observedRunningTime="2026-02-16 21:13:19.782664886 +0000 UTC m=+1017.601348181" watchObservedRunningTime="2026-02-16 21:13:19.78986316 +0000 UTC m=+1017.608546455" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.803866 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb" podStartSLOduration=8.950864979 podStartE2EDuration="35.803847029s" podCreationTimestamp="2026-02-16 21:12:44 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.729267631 +0000 UTC m=+984.547950936" lastFinishedPulling="2026-02-16 21:13:13.582249691 +0000 UTC m=+1011.400932986" observedRunningTime="2026-02-16 21:13:19.795641196 +0000 UTC m=+1017.614324491" watchObservedRunningTime="2026-02-16 21:13:19.803847029 +0000 UTC m=+1017.622530324" Feb 16 21:13:19 crc kubenswrapper[4805]: I0216 21:13:19.831186 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs" podStartSLOduration=8.672145827 podStartE2EDuration="36.831164968s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:12:45.423889468 +0000 UTC m=+983.242572763" lastFinishedPulling="2026-02-16 21:13:13.582908609 +0000 UTC m=+1011.401591904" observedRunningTime="2026-02-16 21:13:19.818178426 +0000 UTC m=+1017.636861711" watchObservedRunningTime="2026-02-16 21:13:19.831164968 +0000 UTC m=+1017.649848263" Feb 16 21:13:20 crc kubenswrapper[4805]: 
I0216 21:13:20.628398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth" event={"ID":"856ddca1-f396-432a-b33a-9fa0c1611e29","Type":"ContainerStarted","Data":"f2ff0ab67f54783b6e6a5a8430ec37dc9281db7d94693f1c90b6cd98ed15fe57"} Feb 16 21:13:20 crc kubenswrapper[4805]: I0216 21:13:20.661035 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth" podStartSLOduration=3.183949748 podStartE2EDuration="37.661017857s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:12:45.665154604 +0000 UTC m=+983.483837899" lastFinishedPulling="2026-02-16 21:13:20.142222723 +0000 UTC m=+1017.960906008" observedRunningTime="2026-02-16 21:13:20.653493584 +0000 UTC m=+1018.472176879" watchObservedRunningTime="2026-02-16 21:13:20.661017857 +0000 UTC m=+1018.479701152" Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.668072 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" event={"ID":"a745e178-a8a5-4f2b-b9bd-ad41a35f6140","Type":"ContainerStarted","Data":"35cb0b852b58d4a464aa5a0860aa70c19aad6b6ce6700c3d87f53c0fc5ab51f6"} Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.669084 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.670278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" event={"ID":"32fb9648-24e5-4073-902e-f76ea1eaa512","Type":"ContainerStarted","Data":"d9fadcc418fae8375f7ad52bf81c3ca774dcb2b3c5c9072f6a7f91a876f00b95"} Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.671231 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.689223 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc" event={"ID":"431105e4-6dfd-4644-ae7a-521284b98eda","Type":"ContainerStarted","Data":"3521cfa016d344cacad1ad9bad1c8d2125d2a774084ad9870f71302aedeea757"} Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.690404 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc" Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.693073 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" event={"ID":"3205b9a4-589f-4200-9e47-a073f38397c1","Type":"ContainerStarted","Data":"2bf67331ffcf3cf5a8b7e91a9372797b14d7cf018e3234fda6bb9823565d936f"} Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.693425 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.717943 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" podStartSLOduration=37.181569028 podStartE2EDuration="40.717914853s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:13:19.101849468 +0000 UTC m=+1016.920532763" lastFinishedPulling="2026-02-16 21:13:22.638195253 +0000 UTC m=+1020.456878588" observedRunningTime="2026-02-16 21:13:23.701497279 +0000 UTC m=+1021.520180594" watchObservedRunningTime="2026-02-16 21:13:23.717914853 +0000 UTC m=+1021.536598148" Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.736365 4805 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" podStartSLOduration=35.973338264 podStartE2EDuration="39.736347772s" podCreationTimestamp="2026-02-16 21:12:44 +0000 UTC" firstStartedPulling="2026-02-16 21:13:18.876001128 +0000 UTC m=+1016.694684423" lastFinishedPulling="2026-02-16 21:13:22.639010586 +0000 UTC m=+1020.457693931" observedRunningTime="2026-02-16 21:13:23.736154437 +0000 UTC m=+1021.554837732" watchObservedRunningTime="2026-02-16 21:13:23.736347772 +0000 UTC m=+1021.555031067" Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.761251 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc" podStartSLOduration=3.9447210889999997 podStartE2EDuration="40.761234615s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:12:45.823243421 +0000 UTC m=+983.641926716" lastFinishedPulling="2026-02-16 21:13:22.639756927 +0000 UTC m=+1020.458440242" observedRunningTime="2026-02-16 21:13:23.760035783 +0000 UTC m=+1021.578719078" watchObservedRunningTime="2026-02-16 21:13:23.761234615 +0000 UTC m=+1021.579917910" Feb 16 21:13:23 crc kubenswrapper[4805]: I0216 21:13:23.775653 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" podStartSLOduration=3.281059927 podStartE2EDuration="39.775634774s" podCreationTimestamp="2026-02-16 21:12:44 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.602034268 +0000 UTC m=+984.420717553" lastFinishedPulling="2026-02-16 21:13:23.096609105 +0000 UTC m=+1020.915292400" observedRunningTime="2026-02-16 21:13:23.770223008 +0000 UTC m=+1021.588906323" watchObservedRunningTime="2026-02-16 21:13:23.775634774 +0000 UTC m=+1021.594318069" Feb 16 21:13:24 crc kubenswrapper[4805]: I0216 21:13:24.170637 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dm6f6" Feb 16 21:13:24 crc kubenswrapper[4805]: I0216 21:13:24.178899 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cdzwg" Feb 16 21:13:24 crc kubenswrapper[4805]: I0216 21:13:24.185680 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jtxhs" Feb 16 21:13:24 crc kubenswrapper[4805]: I0216 21:13:24.207097 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-djjn2" Feb 16 21:13:24 crc kubenswrapper[4805]: I0216 21:13:24.268905 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-77f85" Feb 16 21:13:24 crc kubenswrapper[4805]: I0216 21:13:24.294851 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rwjb2" Feb 16 21:13:24 crc kubenswrapper[4805]: I0216 21:13:24.626489 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth" Feb 16 21:13:24 crc kubenswrapper[4805]: I0216 21:13:24.701870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn" event={"ID":"8131c3df-9b2e-48f7-95c9-95a8d5ba9f69","Type":"ContainerStarted","Data":"53f84780d16a95c0058408c7a5863fff53afbfd67b00c1acf1d2279ae9d086bf"} Feb 16 21:13:24 crc kubenswrapper[4805]: I0216 21:13:24.703507 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn" Feb 16 21:13:24 crc kubenswrapper[4805]: 
I0216 21:13:24.723734 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn" podStartSLOduration=3.201656669 podStartE2EDuration="40.723702661s" podCreationTimestamp="2026-02-16 21:12:44 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.605357008 +0000 UTC m=+984.424040303" lastFinishedPulling="2026-02-16 21:13:24.12740299 +0000 UTC m=+1021.946086295" observedRunningTime="2026-02-16 21:13:24.721332037 +0000 UTC m=+1022.540015332" watchObservedRunningTime="2026-02-16 21:13:24.723702661 +0000 UTC m=+1022.542385956" Feb 16 21:13:24 crc kubenswrapper[4805]: I0216 21:13:24.725417 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kt9zs" Feb 16 21:13:24 crc kubenswrapper[4805]: I0216 21:13:24.818231 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s54qb" Feb 16 21:13:25 crc kubenswrapper[4805]: I0216 21:13:25.037817 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-h547j" Feb 16 21:13:26 crc kubenswrapper[4805]: I0216 21:13:26.720677 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" event={"ID":"b64a3a78-cbf6-44ce-a7f2-7955af1d6e04","Type":"ContainerStarted","Data":"c32d4e8279fcd5d1721a9c03c56039d46a400329979b4d4e4c7f0c7f45c0746a"} Feb 16 21:13:26 crc kubenswrapper[4805]: I0216 21:13:26.722398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2" event={"ID":"f43b76e3-db2c-40f4-80fa-77ed9f196cf5","Type":"ContainerStarted","Data":"0e08d31b9c56badb460c1349eb759b5dcfe45d7365cb19cc1ae03b3810173ef2"} Feb 16 21:13:26 crc kubenswrapper[4805]: 
I0216 21:13:26.722745 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2" Feb 16 21:13:26 crc kubenswrapper[4805]: I0216 21:13:26.723864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" event={"ID":"55ee298b-d2cf-460f-b540-e748a09f81f0","Type":"ContainerStarted","Data":"03c71a5e3c49d73244f2afbb0d881398a9f4919afe5404cb238a4c476089b7b0"} Feb 16 21:13:26 crc kubenswrapper[4805]: I0216 21:13:26.725044 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" event={"ID":"549bee15-d4bb-43c2-af22-1bdbf4e66b78","Type":"ContainerStarted","Data":"6a755b9d11c7d9213e7e7f1f1783c069a060dca282b800302870e3bc8763f06a"} Feb 16 21:13:26 crc kubenswrapper[4805]: I0216 21:13:26.725229 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" Feb 16 21:13:26 crc kubenswrapper[4805]: I0216 21:13:26.753672 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2" podStartSLOduration=4.247741847 podStartE2EDuration="43.753650666s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.559602531 +0000 UTC m=+984.378285826" lastFinishedPulling="2026-02-16 21:13:26.06551135 +0000 UTC m=+1023.884194645" observedRunningTime="2026-02-16 21:13:26.748024364 +0000 UTC m=+1024.566707659" watchObservedRunningTime="2026-02-16 21:13:26.753650666 +0000 UTC m=+1024.572333961" Feb 16 21:13:26 crc kubenswrapper[4805]: I0216 21:13:26.769112 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" podStartSLOduration=3.843179705 
podStartE2EDuration="42.769096324s" podCreationTimestamp="2026-02-16 21:12:44 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.734536333 +0000 UTC m=+984.553219628" lastFinishedPulling="2026-02-16 21:13:25.660452922 +0000 UTC m=+1023.479136247" observedRunningTime="2026-02-16 21:13:26.766199396 +0000 UTC m=+1024.584882691" watchObservedRunningTime="2026-02-16 21:13:26.769096324 +0000 UTC m=+1024.587779609" Feb 16 21:13:27 crc kubenswrapper[4805]: I0216 21:13:27.754388 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" podStartSLOduration=4.46113914 podStartE2EDuration="43.754370357s" podCreationTimestamp="2026-02-16 21:12:44 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.729453105 +0000 UTC m=+984.548136400" lastFinishedPulling="2026-02-16 21:13:26.022684332 +0000 UTC m=+1023.841367617" observedRunningTime="2026-02-16 21:13:27.751621154 +0000 UTC m=+1025.570304449" watchObservedRunningTime="2026-02-16 21:13:27.754370357 +0000 UTC m=+1025.573053652" Feb 16 21:13:27 crc kubenswrapper[4805]: I0216 21:13:27.775543 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" podStartSLOduration=4.248859698 podStartE2EDuration="43.77552722s" podCreationTimestamp="2026-02-16 21:12:44 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.602179232 +0000 UTC m=+984.420862527" lastFinishedPulling="2026-02-16 21:13:26.128846754 +0000 UTC m=+1023.947530049" observedRunningTime="2026-02-16 21:13:27.770993947 +0000 UTC m=+1025.589677232" watchObservedRunningTime="2026-02-16 21:13:27.77552722 +0000 UTC m=+1025.594210515" Feb 16 21:13:28 crc kubenswrapper[4805]: I0216 21:13:28.747940 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb" 
event={"ID":"2840ffe3-d3c1-4faf-bb32-f9c17173713f","Type":"ContainerStarted","Data":"d7efa9ecc652d21e376206784a8edebfecc421b0d7cf82e5eb03753e003080c2"} Feb 16 21:13:28 crc kubenswrapper[4805]: I0216 21:13:28.749603 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb" Feb 16 21:13:28 crc kubenswrapper[4805]: I0216 21:13:28.773310 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb" podStartSLOduration=4.342937201 podStartE2EDuration="45.773288731s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.602169842 +0000 UTC m=+984.420853137" lastFinishedPulling="2026-02-16 21:13:28.032521372 +0000 UTC m=+1025.851204667" observedRunningTime="2026-02-16 21:13:28.76990124 +0000 UTC m=+1026.588584565" watchObservedRunningTime="2026-02-16 21:13:28.773288731 +0000 UTC m=+1026.591972036" Feb 16 21:13:30 crc kubenswrapper[4805]: I0216 21:13:30.394218 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4" Feb 16 21:13:30 crc kubenswrapper[4805]: I0216 21:13:30.717042 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86b9cf86d-ntqc8" Feb 16 21:13:32 crc kubenswrapper[4805]: E0216 21:13:32.599643 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" podUID="209db403-57f8-46b8-9ca3-0986c81dd9c0" Feb 16 21:13:33 crc kubenswrapper[4805]: I0216 
21:13:33.813278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj" event={"ID":"6bb4da12-019d-4101-a5eb-e0c85421d029","Type":"ContainerStarted","Data":"66e62903a8bd8a3850941cff98d7a9cd07a86258f965075cfa12fed0d81db145"} Feb 16 21:13:33 crc kubenswrapper[4805]: I0216 21:13:33.813979 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj" Feb 16 21:13:33 crc kubenswrapper[4805]: I0216 21:13:33.844626 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj" podStartSLOduration=4.43084486 podStartE2EDuration="50.844611001s" podCreationTimestamp="2026-02-16 21:12:43 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.613021936 +0000 UTC m=+984.431705231" lastFinishedPulling="2026-02-16 21:13:33.026788087 +0000 UTC m=+1030.845471372" observedRunningTime="2026-02-16 21:13:33.842352821 +0000 UTC m=+1031.661036136" watchObservedRunningTime="2026-02-16 21:13:33.844611001 +0000 UTC m=+1031.663294296" Feb 16 21:13:34 crc kubenswrapper[4805]: I0216 21:13:34.390419 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4xgqc" Feb 16 21:13:34 crc kubenswrapper[4805]: I0216 21:13:34.604846 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-2mzjb" Feb 16 21:13:34 crc kubenswrapper[4805]: I0216 21:13:34.628554 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mvwth" Feb 16 21:13:34 crc kubenswrapper[4805]: I0216 21:13:34.677276 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-66xx2" Feb 16 21:13:34 crc kubenswrapper[4805]: I0216 21:13:34.685334 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lzgrn" Feb 16 21:13:34 crc kubenswrapper[4805]: I0216 21:13:34.761502 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-fsr75" Feb 16 21:13:34 crc kubenswrapper[4805]: I0216 21:13:34.914569 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" Feb 16 21:13:34 crc kubenswrapper[4805]: I0216 21:13:34.917988 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-nv9kv" Feb 16 21:13:35 crc kubenswrapper[4805]: I0216 21:13:35.009863 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7d4dd64c87-45rd7" Feb 16 21:13:35 crc kubenswrapper[4805]: I0216 21:13:35.055417 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" Feb 16 21:13:35 crc kubenswrapper[4805]: I0216 21:13:35.058597 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-bg92w" Feb 16 21:13:36 crc kubenswrapper[4805]: I0216 21:13:36.481033 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s2t59" Feb 16 21:13:44 crc kubenswrapper[4805]: I0216 21:13:44.616437 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dghtj" Feb 
16 21:13:46 crc kubenswrapper[4805]: I0216 21:13:46.937420 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" event={"ID":"209db403-57f8-46b8-9ca3-0986c81dd9c0","Type":"ContainerStarted","Data":"0572bef46986c4896735ad5925b9da5c3b0b0745d4a8deec8892398dcd103a29"} Feb 16 21:13:46 crc kubenswrapper[4805]: I0216 21:13:46.961953 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-scwrg" podStartSLOduration=3.663627253 podStartE2EDuration="1m2.961924296s" podCreationTimestamp="2026-02-16 21:12:44 +0000 UTC" firstStartedPulling="2026-02-16 21:12:46.753240199 +0000 UTC m=+984.571923494" lastFinishedPulling="2026-02-16 21:13:46.051537212 +0000 UTC m=+1043.870220537" observedRunningTime="2026-02-16 21:13:46.959055248 +0000 UTC m=+1044.777738553" watchObservedRunningTime="2026-02-16 21:13:46.961924296 +0000 UTC m=+1044.780607621" Feb 16 21:14:03 crc kubenswrapper[4805]: I0216 21:14:03.971633 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sd5gx"] Feb 16 21:14:03 crc kubenswrapper[4805]: I0216 21:14:03.975116 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" Feb 16 21:14:03 crc kubenswrapper[4805]: I0216 21:14:03.977474 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-45l7w" Feb 16 21:14:03 crc kubenswrapper[4805]: I0216 21:14:03.977716 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 21:14:03 crc kubenswrapper[4805]: I0216 21:14:03.977871 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 21:14:03 crc kubenswrapper[4805]: I0216 21:14:03.977960 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 21:14:03 crc kubenswrapper[4805]: I0216 21:14:03.989312 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sd5gx"] Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.035991 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mpts2"] Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.039270 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bb059a-29e0-4433-a40c-9750353a0f18-config\") pod \"dnsmasq-dns-675f4bcbfc-sd5gx\" (UID: \"97bb059a-29e0-4433-a40c-9750353a0f18\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.039330 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79wm\" (UniqueName: \"kubernetes.io/projected/97bb059a-29e0-4433-a40c-9750353a0f18-kube-api-access-s79wm\") pod \"dnsmasq-dns-675f4bcbfc-sd5gx\" (UID: \"97bb059a-29e0-4433-a40c-9750353a0f18\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.040181 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.043878 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.045523 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mpts2"] Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.141106 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftkkp\" (UniqueName: \"kubernetes.io/projected/1111a442-f485-494e-9f39-cc197d623c31-kube-api-access-ftkkp\") pod \"dnsmasq-dns-78dd6ddcc-mpts2\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.141338 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-config\") pod \"dnsmasq-dns-78dd6ddcc-mpts2\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.141389 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bb059a-29e0-4433-a40c-9750353a0f18-config\") pod \"dnsmasq-dns-675f4bcbfc-sd5gx\" (UID: \"97bb059a-29e0-4433-a40c-9750353a0f18\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.141439 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mpts2\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:04 crc 
kubenswrapper[4805]: I0216 21:14:04.141547 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s79wm\" (UniqueName: \"kubernetes.io/projected/97bb059a-29e0-4433-a40c-9750353a0f18-kube-api-access-s79wm\") pod \"dnsmasq-dns-675f4bcbfc-sd5gx\" (UID: \"97bb059a-29e0-4433-a40c-9750353a0f18\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.142321 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bb059a-29e0-4433-a40c-9750353a0f18-config\") pod \"dnsmasq-dns-675f4bcbfc-sd5gx\" (UID: \"97bb059a-29e0-4433-a40c-9750353a0f18\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.163392 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s79wm\" (UniqueName: \"kubernetes.io/projected/97bb059a-29e0-4433-a40c-9750353a0f18-kube-api-access-s79wm\") pod \"dnsmasq-dns-675f4bcbfc-sd5gx\" (UID: \"97bb059a-29e0-4433-a40c-9750353a0f18\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.242768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftkkp\" (UniqueName: \"kubernetes.io/projected/1111a442-f485-494e-9f39-cc197d623c31-kube-api-access-ftkkp\") pod \"dnsmasq-dns-78dd6ddcc-mpts2\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.242839 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-config\") pod \"dnsmasq-dns-78dd6ddcc-mpts2\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.242862 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mpts2\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.244473 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-config\") pod \"dnsmasq-dns-78dd6ddcc-mpts2\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.244647 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mpts2\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.263693 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftkkp\" (UniqueName: \"kubernetes.io/projected/1111a442-f485-494e-9f39-cc197d623c31-kube-api-access-ftkkp\") pod \"dnsmasq-dns-78dd6ddcc-mpts2\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.299874 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.377148 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.809315 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sd5gx"] Feb 16 21:14:04 crc kubenswrapper[4805]: I0216 21:14:04.881771 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mpts2"] Feb 16 21:14:05 crc kubenswrapper[4805]: I0216 21:14:05.102051 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" event={"ID":"1111a442-f485-494e-9f39-cc197d623c31","Type":"ContainerStarted","Data":"b53feb7e2171df9a64f435487a903552a8606e6053277fa616c20c7f18d426a2"} Feb 16 21:14:05 crc kubenswrapper[4805]: I0216 21:14:05.103351 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" event={"ID":"97bb059a-29e0-4433-a40c-9750353a0f18","Type":"ContainerStarted","Data":"59767a3a470b05603f0792425a7901d98eb88b66ff97fcc56ec44362c4828fee"} Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.722090 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sd5gx"] Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.761965 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9j79x"] Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.764036 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9j79x" Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.781118 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9j79x"] Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.800534 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9j79x\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") " pod="openstack/dnsmasq-dns-666b6646f7-9j79x" Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.801165 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j25zs\" (UniqueName: \"kubernetes.io/projected/2a328213-f410-43cc-8dd7-51a427a4d7c3-kube-api-access-j25zs\") pod \"dnsmasq-dns-666b6646f7-9j79x\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") " pod="openstack/dnsmasq-dns-666b6646f7-9j79x" Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.801510 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-config\") pod \"dnsmasq-dns-666b6646f7-9j79x\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") " pod="openstack/dnsmasq-dns-666b6646f7-9j79x" Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.904669 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9j79x\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") " pod="openstack/dnsmasq-dns-666b6646f7-9j79x" Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.905036 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j25zs\" (UniqueName: 
\"kubernetes.io/projected/2a328213-f410-43cc-8dd7-51a427a4d7c3-kube-api-access-j25zs\") pod \"dnsmasq-dns-666b6646f7-9j79x\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") " pod="openstack/dnsmasq-dns-666b6646f7-9j79x" Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.905079 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-config\") pod \"dnsmasq-dns-666b6646f7-9j79x\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") " pod="openstack/dnsmasq-dns-666b6646f7-9j79x" Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.905996 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-config\") pod \"dnsmasq-dns-666b6646f7-9j79x\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") " pod="openstack/dnsmasq-dns-666b6646f7-9j79x" Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.906519 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9j79x\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") " pod="openstack/dnsmasq-dns-666b6646f7-9j79x" Feb 16 21:14:06 crc kubenswrapper[4805]: I0216 21:14:06.939263 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j25zs\" (UniqueName: \"kubernetes.io/projected/2a328213-f410-43cc-8dd7-51a427a4d7c3-kube-api-access-j25zs\") pod \"dnsmasq-dns-666b6646f7-9j79x\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") " pod="openstack/dnsmasq-dns-666b6646f7-9j79x" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.074940 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mpts2"] Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.093573 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-kxcsp"] Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.095048 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.095993 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9j79x" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.106557 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kxcsp"] Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.217566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzdnb\" (UniqueName: \"kubernetes.io/projected/e1b6981b-31c5-4bde-a3a3-b76721a723a7-kube-api-access-fzdnb\") pod \"dnsmasq-dns-57d769cc4f-kxcsp\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.217662 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-config\") pod \"dnsmasq-dns-57d769cc4f-kxcsp\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.217763 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kxcsp\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.318940 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kxcsp\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.319398 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzdnb\" (UniqueName: \"kubernetes.io/projected/e1b6981b-31c5-4bde-a3a3-b76721a723a7-kube-api-access-fzdnb\") pod \"dnsmasq-dns-57d769cc4f-kxcsp\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.319436 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-config\") pod \"dnsmasq-dns-57d769cc4f-kxcsp\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.321477 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kxcsp\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.321548 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-config\") pod \"dnsmasq-dns-57d769cc4f-kxcsp\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.379855 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzdnb\" (UniqueName: \"kubernetes.io/projected/e1b6981b-31c5-4bde-a3a3-b76721a723a7-kube-api-access-fzdnb\") pod 
\"dnsmasq-dns-57d769cc4f-kxcsp\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") " pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.424007 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:07 crc kubenswrapper[4805]: W0216 21:14:07.858608 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a328213_f410_43cc_8dd7_51a427a4d7c3.slice/crio-c6aa92b9f441f4fc7fcf9162f38974a96d0a8ff9006644bcd7947494988cd2c8 WatchSource:0}: Error finding container c6aa92b9f441f4fc7fcf9162f38974a96d0a8ff9006644bcd7947494988cd2c8: Status 404 returned error can't find the container with id c6aa92b9f441f4fc7fcf9162f38974a96d0a8ff9006644bcd7947494988cd2c8 Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.859552 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9j79x"] Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.923930 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.925398 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.929593 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.929835 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9prqv" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.929963 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.930083 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.930217 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.930328 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.930434 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.931429 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.938035 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.939535 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.944820 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.950865 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.973311 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 21:14:07 crc kubenswrapper[4805]: I0216 21:14:07.983897 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.004885 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kxcsp"] Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038510 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038546 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjkdn\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-kube-api-access-sjkdn\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038576 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14fe6c77-adbd-4abe-9aff-7bb72474d47b-pod-info\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038598 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-plugins-conf\") pod \"rabbitmq-server-2\" (UID: 
\"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038622 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038639 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-server-conf\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038660 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038678 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a48053f-4668-43af-bda4-7af014d6457d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038693 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: 
\"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038707 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038771 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a48053f-4668-43af-bda4-7af014d6457d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038797 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038814 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038828 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " 
pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038851 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038868 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-config-data\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038883 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7291f76f-5384-4af6-88f0-e041026cae5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038903 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038926 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " 
pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038953 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14fe6c77-adbd-4abe-9aff-7bb72474d47b-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038973 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.038992 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.039018 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95a93760-333e-4689-a64c-c3534a04cec0-pod-info\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.039039 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qwrv\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-kube-api-access-6qwrv\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc 
kubenswrapper[4805]: I0216 21:14:08.039068 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g7hm\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-kube-api-access-7g7hm\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.039091 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-server-conf\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.039119 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.039133 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.039156 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: 
I0216 21:14:08.039170 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.039192 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-config-data\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.039210 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95a93760-333e-4689-a64c-c3534a04cec0-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.039234 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-146d831a-5abd-464e-ad27-980da7be7483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.101092 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.101156 4805 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.141605 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142513 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142538 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142567 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142587 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-config-data\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142607 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7291f76f-5384-4af6-88f0-e041026cae5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142625 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142641 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142665 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14fe6c77-adbd-4abe-9aff-7bb72474d47b-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142680 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-plugins-conf\") pod \"rabbitmq-server-1\" (UID: 
\"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142696 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142715 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95a93760-333e-4689-a64c-c3534a04cec0-pod-info\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142746 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qwrv\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-kube-api-access-6qwrv\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142772 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g7hm\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-kube-api-access-7g7hm\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142790 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-server-conf\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142811 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142834 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142861 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142875 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142901 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-config-data\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142923 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/95a93760-333e-4689-a64c-c3534a04cec0-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142953 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-146d831a-5abd-464e-ad27-980da7be7483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.142993 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjkdn\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-kube-api-access-sjkdn\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.143022 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14fe6c77-adbd-4abe-9aff-7bb72474d47b-pod-info\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.143043 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-plugins-conf\") pod \"rabbitmq-server-2\" (UID: 
\"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.143073 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.143089 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-server-conf\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.143110 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.143125 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a48053f-4668-43af-bda4-7af014d6457d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.143140 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: 
I0216 21:14:08.143155 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.143175 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a48053f-4668-43af-bda4-7af014d6457d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.145360 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.146259 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.147536 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-server-conf\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.148035 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.149454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.149677 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.150212 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.150380 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.150830 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 
21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.150950 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.151000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.151572 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a48053f-4668-43af-bda4-7af014d6457d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.152254 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-config-data\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.152610 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-config-data\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.155979 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.162053 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.162092 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/914431e2210613cdb42ee45d8789399625ced1e6ffb709fe1b4811c9831d39c2/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.162771 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.162789 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7291f76f-5384-4af6-88f0-e041026cae5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a0b12b61bc88df910923ee6bb97fdac79f3a6f1e948ce57348fec3710da23f47/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.163537 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-server-conf\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.165092 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14fe6c77-adbd-4abe-9aff-7bb72474d47b-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1" Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.165669 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.165706 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-146d831a-5abd-464e-ad27-980da7be7483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/74bc4188232fbdfbfb5c6ded44af5120efb3ac2bd3e07d5261392b9eea692a72/globalmount\"" pod="openstack/rabbitmq-server-2"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.172586 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14fe6c77-adbd-4abe-9aff-7bb72474d47b-pod-info\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.172941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.175161 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.177072 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.179467 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95a93760-333e-4689-a64c-c3534a04cec0-pod-info\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.184062 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95a93760-333e-4689-a64c-c3534a04cec0-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.184439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a48053f-4668-43af-bda4-7af014d6457d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.184647 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.185592 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjkdn\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-kube-api-access-sjkdn\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.188616 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g7hm\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-kube-api-access-7g7hm\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.195855 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.210368 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qwrv\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-kube-api-access-6qwrv\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.216113 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.224358 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-146d831a-5abd-464e-ad27-980da7be7483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483\") pod \"rabbitmq-server-2\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " pod="openstack/rabbitmq-server-2"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.227114 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") pod \"rabbitmq-server-1\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " pod="openstack/rabbitmq-server-1"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.229157 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" event={"ID":"e1b6981b-31c5-4bde-a3a3-b76721a723a7","Type":"ContainerStarted","Data":"94fa57a7f34c95b0537236630cf7d260f7589961b0d7dacea7a50aedcc079993"}
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.233144 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9j79x" event={"ID":"2a328213-f410-43cc-8dd7-51a427a4d7c3","Type":"ContainerStarted","Data":"c6aa92b9f441f4fc7fcf9162f38974a96d0a8ff9006644bcd7947494988cd2c8"}
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.265782 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7291f76f-5384-4af6-88f0-e041026cae5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d\") pod \"rabbitmq-server-0\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") " pod="openstack/rabbitmq-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.272556 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.283159 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.293024 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.293933 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.294053 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.294089 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.294142 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.295079 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.295193 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-84zrc"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.304287 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.350697 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f897110-86a6-4edb-a453-a1322e0a580f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.350843 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.350866 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28cdd\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-kube-api-access-28cdd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.350931 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.350987 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.351009 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.351035 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f897110-86a6-4edb-a453-a1322e0a580f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.351050 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.351076 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.351113 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.351132 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.355555 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.365142 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.381118 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.453200 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.453312 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.453342 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.453378 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f897110-86a6-4edb-a453-a1322e0a580f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.453399 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.453447 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.453495 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.453522 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.453550 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f897110-86a6-4edb-a453-a1322e0a580f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.453583 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.453605 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28cdd\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-kube-api-access-28cdd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.455620 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.455965 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.456303 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.457104 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.457113 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.457268 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.459515 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.461311 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.461341 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9531023197cf390bfc7490105fc42faf100fe54366bbc898152126d1b095ba49/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.462691 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f897110-86a6-4edb-a453-a1322e0a580f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.469824 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f897110-86a6-4edb-a453-a1322e0a580f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.471497 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28cdd\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-kube-api-access-28cdd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.504022 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:08 crc kubenswrapper[4805]: I0216 21:14:08.624224 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.306280 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.329741 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.350561 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.353069 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.354651 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.355779 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-kd74n"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.356013 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.359868 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.364373 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.383695 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.400754 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 16 21:14:09 crc kubenswrapper[4805]: W0216 21:14:09.431907 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a48053f_4668_43af_bda4_7af014d6457d.slice/crio-4bc327eb457e5f09d4cda6f26c72c097c9d7092a8c253a36de3c4e37d8afc9fa WatchSource:0}: Error finding container 4bc327eb457e5f09d4cda6f26c72c097c9d7092a8c253a36de3c4e37d8afc9fa: Status 404 returned error can't find the container with id 4bc327eb457e5f09d4cda6f26c72c097c9d7092a8c253a36de3c4e37d8afc9fa
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.471220 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 21:14:09 crc kubenswrapper[4805]: W0216 21:14:09.472416 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f897110_86a6_4edb_a453_a1322e0a580f.slice/crio-679eb0636c8fc2c8a2b4614dc2dcf42c831b8b1f85ea0f796c1682dcb087edbf WatchSource:0}: Error finding container 679eb0636c8fc2c8a2b4614dc2dcf42c831b8b1f85ea0f796c1682dcb087edbf: Status 404 returned error can't find the container with id 679eb0636c8fc2c8a2b4614dc2dcf42c831b8b1f85ea0f796c1682dcb087edbf
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.492298 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.492356 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.492408 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a26e2638-0e9c-4ad8-b45f-fca49ab85641\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a26e2638-0e9c-4ad8-b45f-fca49ab85641\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.492434 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.492454 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-config-data-default\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.492485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhbk5\" (UniqueName: \"kubernetes.io/projected/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-kube-api-access-xhbk5\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.492901 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-kolla-config\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.492992 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.595101 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.595143 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-config-data-default\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.595162 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhbk5\" (UniqueName: \"kubernetes.io/projected/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-kube-api-access-xhbk5\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.595231 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-kolla-config\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.595257 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.595298 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.595332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.595376 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a26e2638-0e9c-4ad8-b45f-fca49ab85641\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a26e2638-0e9c-4ad8-b45f-fca49ab85641\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.595567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.596028 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-config-data-default\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.597238 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.603531 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-kolla-config\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.604309 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.605166 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.605969 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.606076 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a26e2638-0e9c-4ad8-b45f-fca49ab85641\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a26e2638-0e9c-4ad8-b45f-fca49ab85641\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/533ddb0c919ebfc94e04f1fa208cda9455d277e2a223e4fbfc58008ddb238fd0/globalmount\"" pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.619687 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhbk5\" (UniqueName: \"kubernetes.io/projected/8b9deffe-ab6a-46d4-a463-9ed81e6f3889-kube-api-access-xhbk5\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.695739 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a26e2638-0e9c-4ad8-b45f-fca49ab85641\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a26e2638-0e9c-4ad8-b45f-fca49ab85641\") pod \"openstack-galera-0\" (UID: \"8b9deffe-ab6a-46d4-a463-9ed81e6f3889\") " pod="openstack/openstack-galera-0"
Feb 16 21:14:09 crc kubenswrapper[4805]: I0216 21:14:09.817502 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.255164 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"95a93760-333e-4689-a64c-c3534a04cec0","Type":"ContainerStarted","Data":"60a07e3d570d94a4a2c8167f2c76d38beb0cd98ac1679556fc0348cef0903c86"}
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.257261 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"14fe6c77-adbd-4abe-9aff-7bb72474d47b","Type":"ContainerStarted","Data":"9d5c212a85d7beb85f387eb6fd0bd7d9784be2fddc27338346de322c628a1b2a"}
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.259342 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f897110-86a6-4edb-a453-a1322e0a580f","Type":"ContainerStarted","Data":"679eb0636c8fc2c8a2b4614dc2dcf42c831b8b1f85ea0f796c1682dcb087edbf"}
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.279068 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a48053f-4668-43af-bda4-7af014d6457d","Type":"ContainerStarted","Data":"4bc327eb457e5f09d4cda6f26c72c097c9d7092a8c253a36de3c4e37d8afc9fa"}
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.386739 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 21:14:10 crc kubenswrapper[4805]: W0216 21:14:10.402783 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b9deffe_ab6a_46d4_a463_9ed81e6f3889.slice/crio-76a1a4600229f17e66bc19ed832ab615bb5fa58b28780f90a658d5539a2a573c WatchSource:0}: Error finding container 76a1a4600229f17e66bc19ed832ab615bb5fa58b28780f90a658d5539a2a573c: Status 404 returned error can't find the container with id 76a1a4600229f17e66bc19ed832ab615bb5fa58b28780f90a658d5539a2a573c
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.656992 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.659804 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.663807 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.664023 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.666275 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.666911 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-k9vq9"
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.698300 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.727121 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/26f1c84d-9566-4135-a24a-ce299c76a102-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.727217 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f1c84d-9566-4135-a24a-ce299c76a102-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.727239 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f1c84d-9566-4135-a24a-ce299c76a102-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.727309 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/26f1c84d-9566-4135-a24a-ce299c76a102-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.727340 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddsnr\" (UniqueName: \"kubernetes.io/projected/26f1c84d-9566-4135-a24a-ce299c76a102-kube-api-access-ddsnr\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.727388 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5c08c45b-438d-40f6-91ec-4bcee4824142\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c08c45b-438d-40f6-91ec-4bcee4824142\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.727413 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/26f1c84d-9566-4135-a24a-ce299c76a102-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.727437 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f1c84d-9566-4135-a24a-ce299c76a102-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.829632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/26f1c84d-9566-4135-a24a-ce299c76a102-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.829750 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddsnr\" (UniqueName: \"kubernetes.io/projected/26f1c84d-9566-4135-a24a-ce299c76a102-kube-api-access-ddsnr\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.829812 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5c08c45b-438d-40f6-91ec-4bcee4824142\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c08c45b-438d-40f6-91ec-4bcee4824142\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.829844 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/26f1c84d-9566-4135-a24a-ce299c76a102-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.829873 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f1c84d-9566-4135-a24a-ce299c76a102-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.829986 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/26f1c84d-9566-4135-a24a-ce299c76a102-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.830050 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f1c84d-9566-4135-a24a-ce299c76a102-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.830077 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f1c84d-9566-4135-a24a-ce299c76a102-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.832683 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/26f1c84d-9566-4135-a24a-ce299c76a102-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " 
pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.833531 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/26f1c84d-9566-4135-a24a-ce299c76a102-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.837983 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.838024 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5c08c45b-438d-40f6-91ec-4bcee4824142\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c08c45b-438d-40f6-91ec-4bcee4824142\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e8db2cff8f3815b7b5004b9429cd60913eed4118fdd988796c0df0ccc497fb53/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.838555 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f1c84d-9566-4135-a24a-ce299c76a102-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.840977 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f1c84d-9566-4135-a24a-ce299c76a102-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc 
kubenswrapper[4805]: I0216 21:14:10.842608 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/26f1c84d-9566-4135-a24a-ce299c76a102-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.854229 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f1c84d-9566-4135-a24a-ce299c76a102-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.856177 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddsnr\" (UniqueName: \"kubernetes.io/projected/26f1c84d-9566-4135-a24a-ce299c76a102-kube-api-access-ddsnr\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.908989 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5c08c45b-438d-40f6-91ec-4bcee4824142\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c08c45b-438d-40f6-91ec-4bcee4824142\") pod \"openstack-cell1-galera-0\" (UID: \"26f1c84d-9566-4135-a24a-ce299c76a102\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.972741 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.974211 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.977358 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fbrkz" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.977874 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.978142 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.989192 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:10 crc kubenswrapper[4805]: I0216 21:14:10.995602 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.035321 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63c31b7f-0d91-4d04-87b2-2f85a7baf260-memcached-tls-certs\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.035361 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr4vl\" (UniqueName: \"kubernetes.io/projected/63c31b7f-0d91-4d04-87b2-2f85a7baf260-kube-api-access-gr4vl\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.037973 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c31b7f-0d91-4d04-87b2-2f85a7baf260-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.038418 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63c31b7f-0d91-4d04-87b2-2f85a7baf260-config-data\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.038669 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63c31b7f-0d91-4d04-87b2-2f85a7baf260-kolla-config\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.140116 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63c31b7f-0d91-4d04-87b2-2f85a7baf260-memcached-tls-certs\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.140555 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr4vl\" (UniqueName: \"kubernetes.io/projected/63c31b7f-0d91-4d04-87b2-2f85a7baf260-kube-api-access-gr4vl\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.140626 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c31b7f-0d91-4d04-87b2-2f85a7baf260-combined-ca-bundle\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.140669 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63c31b7f-0d91-4d04-87b2-2f85a7baf260-config-data\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.140757 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63c31b7f-0d91-4d04-87b2-2f85a7baf260-kolla-config\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.141672 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63c31b7f-0d91-4d04-87b2-2f85a7baf260-kolla-config\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.142857 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63c31b7f-0d91-4d04-87b2-2f85a7baf260-config-data\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.149139 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c31b7f-0d91-4d04-87b2-2f85a7baf260-combined-ca-bundle\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.155422 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63c31b7f-0d91-4d04-87b2-2f85a7baf260-memcached-tls-certs\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 
16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.160292 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr4vl\" (UniqueName: \"kubernetes.io/projected/63c31b7f-0d91-4d04-87b2-2f85a7baf260-kube-api-access-gr4vl\") pod \"memcached-0\" (UID: \"63c31b7f-0d91-4d04-87b2-2f85a7baf260\") " pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.308783 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.308932 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8b9deffe-ab6a-46d4-a463-9ed81e6f3889","Type":"ContainerStarted","Data":"76a1a4600229f17e66bc19ed832ab615bb5fa58b28780f90a658d5539a2a573c"} Feb 16 21:14:11 crc kubenswrapper[4805]: I0216 21:14:11.965030 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 21:14:12 crc kubenswrapper[4805]: W0216 21:14:12.053810 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26f1c84d_9566_4135_a24a_ce299c76a102.slice/crio-06c0d88997dc3cf21edb5bb9776962983e8640fda3695ca928af21c4d56f7b92 WatchSource:0}: Error finding container 06c0d88997dc3cf21edb5bb9776962983e8640fda3695ca928af21c4d56f7b92: Status 404 returned error can't find the container with id 06c0d88997dc3cf21edb5bb9776962983e8640fda3695ca928af21c4d56f7b92 Feb 16 21:14:12 crc kubenswrapper[4805]: I0216 21:14:12.179238 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 21:14:12 crc kubenswrapper[4805]: W0216 21:14:12.203812 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63c31b7f_0d91_4d04_87b2_2f85a7baf260.slice/crio-26e25c2216bf063e68a8fcb5b595a59861865e9f2f8353d7f808bcc61f47d14b 
WatchSource:0}: Error finding container 26e25c2216bf063e68a8fcb5b595a59861865e9f2f8353d7f808bcc61f47d14b: Status 404 returned error can't find the container with id 26e25c2216bf063e68a8fcb5b595a59861865e9f2f8353d7f808bcc61f47d14b Feb 16 21:14:12 crc kubenswrapper[4805]: I0216 21:14:12.330628 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"26f1c84d-9566-4135-a24a-ce299c76a102","Type":"ContainerStarted","Data":"06c0d88997dc3cf21edb5bb9776962983e8640fda3695ca928af21c4d56f7b92"} Feb 16 21:14:12 crc kubenswrapper[4805]: I0216 21:14:12.339526 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"63c31b7f-0d91-4d04-87b2-2f85a7baf260","Type":"ContainerStarted","Data":"26e25c2216bf063e68a8fcb5b595a59861865e9f2f8353d7f808bcc61f47d14b"} Feb 16 21:14:13 crc kubenswrapper[4805]: I0216 21:14:13.528978 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:14:13 crc kubenswrapper[4805]: I0216 21:14:13.535294 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:14:13 crc kubenswrapper[4805]: I0216 21:14:13.552549 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-csmtk" Feb 16 21:14:13 crc kubenswrapper[4805]: I0216 21:14:13.589366 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:14:13 crc kubenswrapper[4805]: I0216 21:14:13.630820 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wc57\" (UniqueName: \"kubernetes.io/projected/9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd-kube-api-access-2wc57\") pod \"kube-state-metrics-0\" (UID: \"9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd\") " pod="openstack/kube-state-metrics-0" Feb 16 21:14:13 crc kubenswrapper[4805]: I0216 21:14:13.733109 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wc57\" (UniqueName: \"kubernetes.io/projected/9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd-kube-api-access-2wc57\") pod \"kube-state-metrics-0\" (UID: \"9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd\") " pod="openstack/kube-state-metrics-0" Feb 16 21:14:13 crc kubenswrapper[4805]: I0216 21:14:13.773499 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wc57\" (UniqueName: \"kubernetes.io/projected/9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd-kube-api-access-2wc57\") pod \"kube-state-metrics-0\" (UID: \"9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd\") " pod="openstack/kube-state-metrics-0" Feb 16 21:14:13 crc kubenswrapper[4805]: I0216 21:14:13.889947 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.540749 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk"] Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.542113 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk" Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.544907 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-2cplw" Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.556164 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.561441 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdw6z\" (UniqueName: \"kubernetes.io/projected/e83ff69c-bdd9-42c7-9404-eb267edb67b5-kube-api-access-wdw6z\") pod \"observability-ui-dashboards-66cbf594b5-98vxk\" (UID: \"e83ff69c-bdd9-42c7-9404-eb267edb67b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk" Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.561501 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e83ff69c-bdd9-42c7-9404-eb267edb67b5-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-98vxk\" (UID: \"e83ff69c-bdd9-42c7-9404-eb267edb67b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk" Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.578321 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk"] Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 
21:14:14.663132 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdw6z\" (UniqueName: \"kubernetes.io/projected/e83ff69c-bdd9-42c7-9404-eb267edb67b5-kube-api-access-wdw6z\") pod \"observability-ui-dashboards-66cbf594b5-98vxk\" (UID: \"e83ff69c-bdd9-42c7-9404-eb267edb67b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk" Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.663202 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e83ff69c-bdd9-42c7-9404-eb267edb67b5-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-98vxk\" (UID: \"e83ff69c-bdd9-42c7-9404-eb267edb67b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk" Feb 16 21:14:14 crc kubenswrapper[4805]: E0216 21:14:14.664869 4805 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 16 21:14:14 crc kubenswrapper[4805]: E0216 21:14:14.664915 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e83ff69c-bdd9-42c7-9404-eb267edb67b5-serving-cert podName:e83ff69c-bdd9-42c7-9404-eb267edb67b5 nodeName:}" failed. No retries permitted until 2026-02-16 21:14:15.164901188 +0000 UTC m=+1072.983584483 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e83ff69c-bdd9-42c7-9404-eb267edb67b5-serving-cert") pod "observability-ui-dashboards-66cbf594b5-98vxk" (UID: "e83ff69c-bdd9-42c7-9404-eb267edb67b5") : secret "observability-ui-dashboards" not found Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.704034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdw6z\" (UniqueName: \"kubernetes.io/projected/e83ff69c-bdd9-42c7-9404-eb267edb67b5-kube-api-access-wdw6z\") pod \"observability-ui-dashboards-66cbf594b5-98vxk\" (UID: \"e83ff69c-bdd9-42c7-9404-eb267edb67b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk" Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.729947 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.809714 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.814093 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.819307 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-tx9qq"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.820324 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.828026 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.828589 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.828704 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.828941 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.829902 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.859666 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.868643 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.983996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.984292 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.984415 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.984493 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e77434f7-c14a-4513-b88b-caaea89911c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.984569 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.984670 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn5rh\" (UniqueName: \"kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-kube-api-access-nn5rh\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.984783 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.984873 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.984957 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.985060 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.985501 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7ffb44489c-m6dch"]
Feb 16 21:14:14 crc kubenswrapper[4805]: I0216 21:14:14.987008 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.016199 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7ffb44489c-m6dch"]
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086178 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086241 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn5rh\" (UniqueName: \"kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-kube-api-access-nn5rh\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086294 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086323 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086352 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086379 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5hvt\" (UniqueName: \"kubernetes.io/projected/962b8225-d957-46fa-bbde-052b0a0f8441-kube-api-access-p5hvt\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086403 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/962b8225-d957-46fa-bbde-052b0a0f8441-console-serving-cert\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086422 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086488 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-console-config\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086506 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-oauth-serving-cert\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086548 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/962b8225-d957-46fa-bbde-052b0a0f8441-console-oauth-config\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086584 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086608 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-service-ca\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086634 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e77434f7-c14a-4513-b88b-caaea89911c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.086675 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-trusted-ca-bundle\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.087234 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.087413 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.087826 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.091205 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
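The entries above trace the kubelet's per-volume lifecycle for `openstack/prometheus-metric-storage-0`: `operationExecutor.VerifyControllerAttachedVolume started` (reconciler_common.go:245), then `operationExecutor.MountVolume started` (reconciler_common.go:218), then `MountVolume.SetUp succeeded` (operation_generator.go:637). A minimal sketch for pulling (event, volume, pod) triples out of such lines; the regex assumes the klog quote-escaping exactly as rendered in this capture (quotes inside the message appear as `\"`), and is illustrative rather than any official parser:

```python
import re

# Kubelet volume events as rendered in the journal above: the structured-log
# message is quoted, and quotes inside it are escaped as \" in the raw text.
LINE_RE = re.compile(
    r'"(?P<event>[\w.]+) (?:started|succeeded) for volume '
    r'\\"(?P<volume>[^\\]+)\\"'      # volume name, e.g. web-config
    r'.*?pod \\"(?P<pod>[^\\]+)\\"'  # pod name, e.g. prometheus-metric-storage-0
)

def parse_volume_event(line: str):
    """Return (event, volume, pod) for a kubelet volume log line, or None."""
    m = LINE_RE.search(line)
    return (m.group("event"), m.group("volume"), m.group("pod")) if m else None

# A sample entry copied from the log above (one journal line):
sample = (
    r'Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.092868 4805 '
    r'operation_generator.go:637] "MountVolume.SetUp succeeded for volume '
    r'\"web-config\" (UniqueName: \"kubernetes.io/secret/'
    r'e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-web-config\") '
    r'pod \"prometheus-metric-storage-0\" '
    r'(UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") "'
)
print(parse_volume_event(sample))
# -> ('MountVolume.SetUp', 'web-config', 'prometheus-metric-storage-0')
```

Grouping these triples by pod makes it easy to spot a volume that reached "started" but never "succeeded", which is the usual first question when a pod is stuck in ContainerCreating.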
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.091237 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e77434f7-c14a-4513-b88b-caaea89911c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/820435a9e07a10f19b33f7de556745380338e31e769cf4b46fae642a65ea8517/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.092868 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.093563 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.100341 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.103308 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.108585 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.135374 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn5rh\" (UniqueName: \"kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-kube-api-access-nn5rh\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.184148 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e77434f7-c14a-4513-b88b-caaea89911c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3\") pod \"prometheus-metric-storage-0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.188421 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5hvt\" (UniqueName: \"kubernetes.io/projected/962b8225-d957-46fa-bbde-052b0a0f8441-kube-api-access-p5hvt\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.188472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/962b8225-d957-46fa-bbde-052b0a0f8441-console-serving-cert\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.188525 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-console-config\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.188545 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-oauth-serving-cert\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.188569 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e83ff69c-bdd9-42c7-9404-eb267edb67b5-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-98vxk\" (UID: \"e83ff69c-bdd9-42c7-9404-eb267edb67b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.188593 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/962b8225-d957-46fa-bbde-052b0a0f8441-console-oauth-config\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.188636 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-service-ca\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.188673 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-trusted-ca-bundle\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.189830 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-trusted-ca-bundle\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.192636 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-service-ca\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.192799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-console-config\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.193230 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/962b8225-d957-46fa-bbde-052b0a0f8441-oauth-serving-cert\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.193808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/962b8225-d957-46fa-bbde-052b0a0f8441-console-serving-cert\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.195122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e83ff69c-bdd9-42c7-9404-eb267edb67b5-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-98vxk\" (UID: \"e83ff69c-bdd9-42c7-9404-eb267edb67b5\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.198395 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/962b8225-d957-46fa-bbde-052b0a0f8441-console-oauth-config\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.208438 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5hvt\" (UniqueName: \"kubernetes.io/projected/962b8225-d957-46fa-bbde-052b0a0f8441-kube-api-access-p5hvt\") pod \"console-7ffb44489c-m6dch\" (UID: \"962b8225-d957-46fa-bbde-052b0a0f8441\") " pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.318073 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.471749 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk"
Feb 16 21:14:15 crc kubenswrapper[4805]: I0216 21:14:15.485998 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.843836 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ntwbd"]
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.845334 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.861741 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jtmkd"]
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.863711 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jtmkd"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.864846 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.865017 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-vxvhv"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.865341 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.895260 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ntwbd"]
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.904478 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jtmkd"]
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.936161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/127a1d16-9779-4760-88eb-28d61312ef0f-var-log-ovn\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.936236 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127a1d16-9779-4760-88eb-28d61312ef0f-combined-ca-bundle\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.936380 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9kj9\" (UniqueName: \"kubernetes.io/projected/127a1d16-9779-4760-88eb-28d61312ef0f-kube-api-access-z9kj9\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.936477 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/127a1d16-9779-4760-88eb-28d61312ef0f-scripts\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.936563 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/127a1d16-9779-4760-88eb-28d61312ef0f-var-run-ovn\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.936597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/127a1d16-9779-4760-88eb-28d61312ef0f-var-run\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.936865 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a1d16-9779-4760-88eb-28d61312ef0f-ovn-controller-tls-certs\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.992566 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.994600 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.997294 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.997430 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.997589 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.997591 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 16 21:14:16 crc kubenswrapper[4805]: I0216 21:14:16.997645 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-sbbfk"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.007301 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049441 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-var-lib\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049495 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9kj9\" (UniqueName: \"kubernetes.io/projected/127a1d16-9779-4760-88eb-28d61312ef0f-kube-api-access-z9kj9\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049543 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/127a1d16-9779-4760-88eb-28d61312ef0f-scripts\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049591 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/127a1d16-9779-4760-88eb-28d61312ef0f-var-run-ovn\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/127a1d16-9779-4760-88eb-28d61312ef0f-var-run\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-etc-ovs\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049673 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/faacbcd6-a65d-46c0-9173-f96b12b74793-scripts\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049754 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqwls\" (UniqueName: \"kubernetes.io/projected/faacbcd6-a65d-46c0-9173-f96b12b74793-kube-api-access-fqwls\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049802 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-var-run\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049841 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a1d16-9779-4760-88eb-28d61312ef0f-ovn-controller-tls-certs\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049871 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/127a1d16-9779-4760-88eb-28d61312ef0f-var-log-ovn\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127a1d16-9779-4760-88eb-28d61312ef0f-combined-ca-bundle\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.049923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-var-log\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.050154 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/127a1d16-9779-4760-88eb-28d61312ef0f-var-run-ovn\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.050259 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/127a1d16-9779-4760-88eb-28d61312ef0f-var-run\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.050458 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/127a1d16-9779-4760-88eb-28d61312ef0f-var-log-ovn\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.051998 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/127a1d16-9779-4760-88eb-28d61312ef0f-scripts\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.061542 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127a1d16-9779-4760-88eb-28d61312ef0f-combined-ca-bundle\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.064533 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a1d16-9779-4760-88eb-28d61312ef0f-ovn-controller-tls-certs\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.077387 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9kj9\" (UniqueName: \"kubernetes.io/projected/127a1d16-9779-4760-88eb-28d61312ef0f-kube-api-access-z9kj9\") pod \"ovn-controller-ntwbd\" (UID: \"127a1d16-9779-4760-88eb-28d61312ef0f\") " pod="openstack/ovn-controller-ntwbd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.151384 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-var-lib\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.151840 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/081f6c0e-a934-4a00-8be2-8bc55acb9585-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.151869 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/081f6c0e-a934-4a00-8be2-8bc55acb9585-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.151891 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/081f6c0e-a934-4a00-8be2-8bc55acb9585-config\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.151926 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgk5h\" (UniqueName: \"kubernetes.io/projected/081f6c0e-a934-4a00-8be2-8bc55acb9585-kube-api-access-rgk5h\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.151957 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/081f6c0e-a934-4a00-8be2-8bc55acb9585-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0"
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.151988 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-etc-ovs\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.152017 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/081f6c0e-a934-4a00-8be2-8bc55acb9585-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.152042 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/faacbcd6-a65d-46c0-9173-f96b12b74793-scripts\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.152093 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqwls\" (UniqueName: \"kubernetes.io/projected/faacbcd6-a65d-46c0-9173-f96b12b74793-kube-api-access-fqwls\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.152139 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-var-run\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.152209 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/081f6c0e-a934-4a00-8be2-8bc55acb9585-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.152241 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-var-log\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.152268 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1036662f-f518-4d08-a558-854ac1bc009e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1036662f-f518-4d08-a558-854ac1bc009e\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.152873 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-var-lib\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.153069 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-etc-ovs\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.153145 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-var-run\") pod \"ovn-controller-ovs-jtmkd\" (UID: 
\"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.153238 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/faacbcd6-a65d-46c0-9173-f96b12b74793-var-log\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.155258 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/faacbcd6-a65d-46c0-9173-f96b12b74793-scripts\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.170594 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqwls\" (UniqueName: \"kubernetes.io/projected/faacbcd6-a65d-46c0-9173-f96b12b74793-kube-api-access-fqwls\") pod \"ovn-controller-ovs-jtmkd\" (UID: \"faacbcd6-a65d-46c0-9173-f96b12b74793\") " pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.179204 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ntwbd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.198819 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.253552 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/081f6c0e-a934-4a00-8be2-8bc55acb9585-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.253593 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/081f6c0e-a934-4a00-8be2-8bc55acb9585-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.253613 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/081f6c0e-a934-4a00-8be2-8bc55acb9585-config\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.253640 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgk5h\" (UniqueName: \"kubernetes.io/projected/081f6c0e-a934-4a00-8be2-8bc55acb9585-kube-api-access-rgk5h\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.253662 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/081f6c0e-a934-4a00-8be2-8bc55acb9585-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.253691 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/081f6c0e-a934-4a00-8be2-8bc55acb9585-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.253821 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081f6c0e-a934-4a00-8be2-8bc55acb9585-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.253864 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1036662f-f518-4d08-a558-854ac1bc009e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1036662f-f518-4d08-a558-854ac1bc009e\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.255289 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/081f6c0e-a934-4a00-8be2-8bc55acb9585-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.255534 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/081f6c0e-a934-4a00-8be2-8bc55acb9585-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.256213 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/081f6c0e-a934-4a00-8be2-8bc55acb9585-config\") pod 
\"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.265492 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/081f6c0e-a934-4a00-8be2-8bc55acb9585-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.279488 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/081f6c0e-a934-4a00-8be2-8bc55acb9585-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.279937 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081f6c0e-a934-4a00-8be2-8bc55acb9585-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.280226 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.280249 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1036662f-f518-4d08-a558-854ac1bc009e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1036662f-f518-4d08-a558-854ac1bc009e\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4b436324697a27789c4298e1ad44245bc78b418e0d68ec011d797b2fe5017f81/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.282431 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgk5h\" (UniqueName: \"kubernetes.io/projected/081f6c0e-a934-4a00-8be2-8bc55acb9585-kube-api-access-rgk5h\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.337669 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1036662f-f518-4d08-a558-854ac1bc009e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1036662f-f518-4d08-a558-854ac1bc009e\") pod \"ovsdbserver-nb-0\" (UID: \"081f6c0e-a934-4a00-8be2-8bc55acb9585\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:17 crc kubenswrapper[4805]: I0216 21:14:17.341909 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.506271 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.508978 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.511351 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.512432 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.512527 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.512780 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-fkf2x" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.529887 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.569862 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd","Type":"ContainerStarted","Data":"fc70b4c7bea5abb72e9355a883f4cbc19ce14a68cf0c8a723778dc50f5022ce8"} Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.626014 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a7894c-709c-47a8-990f-b051e2199694-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.626178 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a7894c-709c-47a8-990f-b051e2199694-config\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 
21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.626264 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01a7894c-709c-47a8-990f-b051e2199694-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.626529 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-65de8e81-1d77-4be9-b52e-c47006110ec1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65de8e81-1d77-4be9-b52e-c47006110ec1\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.626675 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7n5d\" (UniqueName: \"kubernetes.io/projected/01a7894c-709c-47a8-990f-b051e2199694-kube-api-access-n7n5d\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.627644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01a7894c-709c-47a8-990f-b051e2199694-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.627702 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a7894c-709c-47a8-990f-b051e2199694-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: 
I0216 21:14:20.627977 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a7894c-709c-47a8-990f-b051e2199694-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.730452 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01a7894c-709c-47a8-990f-b051e2199694-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.730513 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a7894c-709c-47a8-990f-b051e2199694-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.730560 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a7894c-709c-47a8-990f-b051e2199694-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.730668 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a7894c-709c-47a8-990f-b051e2199694-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.730694 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/01a7894c-709c-47a8-990f-b051e2199694-config\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.730753 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01a7894c-709c-47a8-990f-b051e2199694-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.730867 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-65de8e81-1d77-4be9-b52e-c47006110ec1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65de8e81-1d77-4be9-b52e-c47006110ec1\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.730954 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7n5d\" (UniqueName: \"kubernetes.io/projected/01a7894c-709c-47a8-990f-b051e2199694-kube-api-access-n7n5d\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.734713 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01a7894c-709c-47a8-990f-b051e2199694-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.735272 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a7894c-709c-47a8-990f-b051e2199694-config\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") 
" pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.736378 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.736412 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-65de8e81-1d77-4be9-b52e-c47006110ec1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65de8e81-1d77-4be9-b52e-c47006110ec1\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0d9199a6f3b1e2ee8bd457bd9e818423b9bc609a60df3171d8b4f6cf985bfae7/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.739173 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a7894c-709c-47a8-990f-b051e2199694-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.740256 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01a7894c-709c-47a8-990f-b051e2199694-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.741688 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a7894c-709c-47a8-990f-b051e2199694-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.746451 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n7n5d\" (UniqueName: \"kubernetes.io/projected/01a7894c-709c-47a8-990f-b051e2199694-kube-api-access-n7n5d\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.748106 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a7894c-709c-47a8-990f-b051e2199694-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.785549 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-65de8e81-1d77-4be9-b52e-c47006110ec1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65de8e81-1d77-4be9-b52e-c47006110ec1\") pod \"ovsdbserver-sb-0\" (UID: \"01a7894c-709c-47a8-990f-b051e2199694\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:20 crc kubenswrapper[4805]: I0216 21:14:20.833408 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:34 crc kubenswrapper[4805]: E0216 21:14:34.906330 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 21:14:34 crc kubenswrapper[4805]: E0216 21:14:34.906930 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j25zs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContex
t:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-9j79x_openstack(2a328213-f410-43cc-8dd7-51a427a4d7c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:14:34 crc kubenswrapper[4805]: E0216 21:14:34.908323 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-9j79x" podUID="2a328213-f410-43cc-8dd7-51a427a4d7c3" Feb 16 21:14:34 crc kubenswrapper[4805]: E0216 21:14:34.933561 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 21:14:34 crc kubenswrapper[4805]: E0216 21:14:34.933761 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftkkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-mpts2_openstack(1111a442-f485-494e-9f39-cc197d623c31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:14:34 crc kubenswrapper[4805]: E0216 21:14:34.935008 4805 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" podUID="1111a442-f485-494e-9f39-cc197d623c31" Feb 16 21:14:35 crc kubenswrapper[4805]: E0216 21:14:35.015408 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 21:14:35 crc kubenswrapper[4805]: E0216 21:14:35.015581 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-kxcsp_openstack(e1b6981b-31c5-4bde-a3a3-b76721a723a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:14:35 crc kubenswrapper[4805]: E0216 21:14:35.015874 4805 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 21:14:35 crc kubenswrapper[4805]: E0216 21:14:35.015964 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s79wm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfil
e:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-sd5gx_openstack(97bb059a-29e0-4433-a40c-9750353a0f18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:14:35 crc kubenswrapper[4805]: E0216 21:14:35.017527 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" podUID="97bb059a-29e0-4433-a40c-9750353a0f18" Feb 16 21:14:35 crc kubenswrapper[4805]: E0216 21:14:35.017583 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" podUID="e1b6981b-31c5-4bde-a3a3-b76721a723a7" Feb 16 21:14:35 crc kubenswrapper[4805]: I0216 21:14:35.711807 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7ffb44489c-m6dch"] Feb 16 21:14:35 crc kubenswrapper[4805]: E0216 21:14:35.735516 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-9j79x" podUID="2a328213-f410-43cc-8dd7-51a427a4d7c3" Feb 16 21:14:35 crc kubenswrapper[4805]: E0216 21:14:35.737755 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" 
pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" podUID="e1b6981b-31c5-4bde-a3a3-b76721a723a7" Feb 16 21:14:35 crc kubenswrapper[4805]: I0216 21:14:35.764190 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk"] Feb 16 21:14:35 crc kubenswrapper[4805]: I0216 21:14:35.796538 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ntwbd"] Feb 16 21:14:35 crc kubenswrapper[4805]: I0216 21:14:35.860353 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.123235 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 21:14:36 crc kubenswrapper[4805]: W0216 21:14:36.289968 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a7894c_709c_47a8_990f_b051e2199694.slice/crio-81aacc8433ae85fa219baa3007df350bd3613e3b6c71a9aa10a0b0d5338a26ba WatchSource:0}: Error finding container 81aacc8433ae85fa219baa3007df350bd3613e3b6c71a9aa10a0b0d5338a26ba: Status 404 returned error can't find the container with id 81aacc8433ae85fa219baa3007df350bd3613e3b6c71a9aa10a0b0d5338a26ba Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.621840 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.651680 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.695089 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-config\") pod \"1111a442-f485-494e-9f39-cc197d623c31\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.695377 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s79wm\" (UniqueName: \"kubernetes.io/projected/97bb059a-29e0-4433-a40c-9750353a0f18-kube-api-access-s79wm\") pod \"97bb059a-29e0-4433-a40c-9750353a0f18\" (UID: \"97bb059a-29e0-4433-a40c-9750353a0f18\") " Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.695458 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-dns-svc\") pod \"1111a442-f485-494e-9f39-cc197d623c31\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.695899 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bb059a-29e0-4433-a40c-9750353a0f18-config\") pod \"97bb059a-29e0-4433-a40c-9750353a0f18\" (UID: \"97bb059a-29e0-4433-a40c-9750353a0f18\") " Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.695963 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftkkp\" (UniqueName: \"kubernetes.io/projected/1111a442-f485-494e-9f39-cc197d623c31-kube-api-access-ftkkp\") pod \"1111a442-f485-494e-9f39-cc197d623c31\" (UID: \"1111a442-f485-494e-9f39-cc197d623c31\") " Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.695977 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-config" (OuterVolumeSpecName: "config") pod "1111a442-f485-494e-9f39-cc197d623c31" (UID: "1111a442-f485-494e-9f39-cc197d623c31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.696692 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.697386 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1111a442-f485-494e-9f39-cc197d623c31" (UID: "1111a442-f485-494e-9f39-cc197d623c31"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.698620 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bb059a-29e0-4433-a40c-9750353a0f18-config" (OuterVolumeSpecName: "config") pod "97bb059a-29e0-4433-a40c-9750353a0f18" (UID: "97bb059a-29e0-4433-a40c-9750353a0f18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.703167 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97bb059a-29e0-4433-a40c-9750353a0f18-kube-api-access-s79wm" (OuterVolumeSpecName: "kube-api-access-s79wm") pod "97bb059a-29e0-4433-a40c-9750353a0f18" (UID: "97bb059a-29e0-4433-a40c-9750353a0f18"). InnerVolumeSpecName "kube-api-access-s79wm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.708103 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1111a442-f485-494e-9f39-cc197d623c31-kube-api-access-ftkkp" (OuterVolumeSpecName: "kube-api-access-ftkkp") pod "1111a442-f485-494e-9f39-cc197d623c31" (UID: "1111a442-f485-494e-9f39-cc197d623c31"). InnerVolumeSpecName "kube-api-access-ftkkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.745061 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"95a93760-333e-4689-a64c-c3534a04cec0","Type":"ContainerStarted","Data":"7aeeed8f72d2e51caa4f2b0119cd92aa83ce279f4caef23c61ee0897a9f4e84f"} Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.746796 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7ffb44489c-m6dch" event={"ID":"962b8225-d957-46fa-bbde-052b0a0f8441","Type":"ContainerStarted","Data":"a708f49267daa6ddfecca0fda64610ed065c1e2facf59dc42a3b35a7cb8360de"} Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.749329 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"01a7894c-709c-47a8-990f-b051e2199694","Type":"ContainerStarted","Data":"81aacc8433ae85fa219baa3007df350bd3613e3b6c71a9aa10a0b0d5338a26ba"} Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.750180 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0","Type":"ContainerStarted","Data":"d8c4d866361da01f71ce6a2c664fd126f578bc2249601c1655e57b5782762dac"} Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.751130 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"8b9deffe-ab6a-46d4-a463-9ed81e6f3889","Type":"ContainerStarted","Data":"c5c9147e7188248ccc66b2936c8ec76c5cedd9f5e8ef92e41b7a504c902b1d9b"} Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.753339 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ntwbd" event={"ID":"127a1d16-9779-4760-88eb-28d61312ef0f","Type":"ContainerStarted","Data":"31c3169fed0a991501714b2dede072b239bd2ea1944295b7ca6b52e7b0791578"} Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.754654 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" event={"ID":"97bb059a-29e0-4433-a40c-9750353a0f18","Type":"ContainerDied","Data":"59767a3a470b05603f0792425a7901d98eb88b66ff97fcc56ec44362c4828fee"} Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.754743 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sd5gx" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.757222 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk" event={"ID":"e83ff69c-bdd9-42c7-9404-eb267edb67b5","Type":"ContainerStarted","Data":"5e28f55b6722f262a45d6748435f7fd40bfc1853f2b240f4ff8734185955fc17"} Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.758218 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" event={"ID":"1111a442-f485-494e-9f39-cc197d623c31","Type":"ContainerDied","Data":"b53feb7e2171df9a64f435487a903552a8606e6053277fa616c20c7f18d426a2"} Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.758331 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mpts2" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.801360 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s79wm\" (UniqueName: \"kubernetes.io/projected/97bb059a-29e0-4433-a40c-9750353a0f18-kube-api-access-s79wm\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.801394 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1111a442-f485-494e-9f39-cc197d623c31-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.801408 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bb059a-29e0-4433-a40c-9750353a0f18-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.801421 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftkkp\" (UniqueName: \"kubernetes.io/projected/1111a442-f485-494e-9f39-cc197d623c31-kube-api-access-ftkkp\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.885784 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mpts2"] Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.895690 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mpts2"] Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.915109 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sd5gx"] Feb 16 21:14:36 crc kubenswrapper[4805]: I0216 21:14:36.930335 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sd5gx"] Feb 16 21:14:37 crc kubenswrapper[4805]: I0216 21:14:37.157682 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jtmkd"] Feb 16 21:14:37 crc kubenswrapper[4805]: W0216 
21:14:37.162520 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaacbcd6_a65d_46c0_9173_f96b12b74793.slice/crio-9c122025fc47ec9330200b36822c6dcc65333313605fabb3bb9bddb83b97a0ca WatchSource:0}: Error finding container 9c122025fc47ec9330200b36822c6dcc65333313605fabb3bb9bddb83b97a0ca: Status 404 returned error can't find the container with id 9c122025fc47ec9330200b36822c6dcc65333313605fabb3bb9bddb83b97a0ca Feb 16 21:14:37 crc kubenswrapper[4805]: I0216 21:14:37.617736 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1111a442-f485-494e-9f39-cc197d623c31" path="/var/lib/kubelet/pods/1111a442-f485-494e-9f39-cc197d623c31/volumes" Feb 16 21:14:37 crc kubenswrapper[4805]: I0216 21:14:37.618777 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97bb059a-29e0-4433-a40c-9750353a0f18" path="/var/lib/kubelet/pods/97bb059a-29e0-4433-a40c-9750353a0f18/volumes" Feb 16 21:14:37 crc kubenswrapper[4805]: I0216 21:14:37.769821 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"63c31b7f-0d91-4d04-87b2-2f85a7baf260","Type":"ContainerStarted","Data":"59319c18b61e71208b4b7f8f801b64c0ee5f5d597354702037d04e7d6ffbdb5b"} Feb 16 21:14:37 crc kubenswrapper[4805]: I0216 21:14:37.770054 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 21:14:37 crc kubenswrapper[4805]: I0216 21:14:37.771608 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7ffb44489c-m6dch" event={"ID":"962b8225-d957-46fa-bbde-052b0a0f8441","Type":"ContainerStarted","Data":"7a18d9393730109e73e60e550cf1a998ce5728e5148c8d85de246ead9c8504c7"} Feb 16 21:14:37 crc kubenswrapper[4805]: I0216 21:14:37.772778 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jtmkd" 
event={"ID":"faacbcd6-a65d-46c0-9173-f96b12b74793","Type":"ContainerStarted","Data":"9c122025fc47ec9330200b36822c6dcc65333313605fabb3bb9bddb83b97a0ca"} Feb 16 21:14:37 crc kubenswrapper[4805]: I0216 21:14:37.792914 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.858987969 podStartE2EDuration="27.792894386s" podCreationTimestamp="2026-02-16 21:14:10 +0000 UTC" firstStartedPulling="2026-02-16 21:14:12.207908231 +0000 UTC m=+1070.026591516" lastFinishedPulling="2026-02-16 21:14:35.141814638 +0000 UTC m=+1092.960497933" observedRunningTime="2026-02-16 21:14:37.783008249 +0000 UTC m=+1095.601691544" watchObservedRunningTime="2026-02-16 21:14:37.792894386 +0000 UTC m=+1095.611577681" Feb 16 21:14:37 crc kubenswrapper[4805]: I0216 21:14:37.812061 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7ffb44489c-m6dch" podStartSLOduration=23.812032924 podStartE2EDuration="23.812032924s" podCreationTimestamp="2026-02-16 21:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:37.8037365 +0000 UTC m=+1095.622419815" watchObservedRunningTime="2026-02-16 21:14:37.812032924 +0000 UTC m=+1095.630716219" Feb 16 21:14:38 crc kubenswrapper[4805]: I0216 21:14:38.099800 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:14:38 crc kubenswrapper[4805]: I0216 21:14:38.099864 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:14:38 crc kubenswrapper[4805]: I0216 21:14:38.150952 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 21:14:38 crc kubenswrapper[4805]: W0216 21:14:38.360562 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod081f6c0e_a934_4a00_8be2_8bc55acb9585.slice/crio-5ccbd3e43a0e05a3791d9dfb7db62a1aec66e2b1b58a2adc52093bb1eba6d135 WatchSource:0}: Error finding container 5ccbd3e43a0e05a3791d9dfb7db62a1aec66e2b1b58a2adc52093bb1eba6d135: Status 404 returned error can't find the container with id 5ccbd3e43a0e05a3791d9dfb7db62a1aec66e2b1b58a2adc52093bb1eba6d135 Feb 16 21:14:38 crc kubenswrapper[4805]: I0216 21:14:38.783588 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"26f1c84d-9566-4135-a24a-ce299c76a102","Type":"ContainerStarted","Data":"23b8bee0ab782e764656f5db67750b996caf777a158c3cba690daae1912926c6"} Feb 16 21:14:38 crc kubenswrapper[4805]: I0216 21:14:38.786035 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"081f6c0e-a934-4a00-8be2-8bc55acb9585","Type":"ContainerStarted","Data":"5ccbd3e43a0e05a3791d9dfb7db62a1aec66e2b1b58a2adc52093bb1eba6d135"} Feb 16 21:14:38 crc kubenswrapper[4805]: I0216 21:14:38.787898 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f897110-86a6-4edb-a453-a1322e0a580f","Type":"ContainerStarted","Data":"450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9"} Feb 16 21:14:38 crc kubenswrapper[4805]: I0216 21:14:38.790277 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"8a48053f-4668-43af-bda4-7af014d6457d","Type":"ContainerStarted","Data":"bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d"} Feb 16 21:14:38 crc kubenswrapper[4805]: I0216 21:14:38.792697 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"14fe6c77-adbd-4abe-9aff-7bb72474d47b","Type":"ContainerStarted","Data":"5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877"} Feb 16 21:14:39 crc kubenswrapper[4805]: I0216 21:14:39.806521 4805 generic.go:334] "Generic (PLEG): container finished" podID="8b9deffe-ab6a-46d4-a463-9ed81e6f3889" containerID="c5c9147e7188248ccc66b2936c8ec76c5cedd9f5e8ef92e41b7a504c902b1d9b" exitCode=0 Feb 16 21:14:39 crc kubenswrapper[4805]: I0216 21:14:39.807898 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8b9deffe-ab6a-46d4-a463-9ed81e6f3889","Type":"ContainerDied","Data":"c5c9147e7188248ccc66b2936c8ec76c5cedd9f5e8ef92e41b7a504c902b1d9b"} Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.831279 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd","Type":"ContainerStarted","Data":"33484dc35b67d5539f060def9b4ee2eac83b9d86c0cc5a9d1ea82a3904506c8f"} Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.831926 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.833763 4805 generic.go:334] "Generic (PLEG): container finished" podID="26f1c84d-9566-4135-a24a-ce299c76a102" containerID="23b8bee0ab782e764656f5db67750b996caf777a158c3cba690daae1912926c6" exitCode=0 Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.833842 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"26f1c84d-9566-4135-a24a-ce299c76a102","Type":"ContainerDied","Data":"23b8bee0ab782e764656f5db67750b996caf777a158c3cba690daae1912926c6"} Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.837026 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk" event={"ID":"e83ff69c-bdd9-42c7-9404-eb267edb67b5","Type":"ContainerStarted","Data":"d2391382e92ac67ba16cff87d8f348116482546b56860b87f04daa56fc854e4c"} Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.838676 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jtmkd" event={"ID":"faacbcd6-a65d-46c0-9173-f96b12b74793","Type":"ContainerStarted","Data":"3bec6ffa0c78dff4dd39300a988310ea46ba58efc371fa7ba60414ebb1b84bf6"} Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.847807 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.872624761 podStartE2EDuration="28.847791496s" podCreationTimestamp="2026-02-16 21:14:13 +0000 UTC" firstStartedPulling="2026-02-16 21:14:20.3496563 +0000 UTC m=+1078.168339605" lastFinishedPulling="2026-02-16 21:14:41.324823015 +0000 UTC m=+1099.143506340" observedRunningTime="2026-02-16 21:14:41.844248308 +0000 UTC m=+1099.662931613" watchObservedRunningTime="2026-02-16 21:14:41.847791496 +0000 UTC m=+1099.666474791" Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.850972 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"081f6c0e-a934-4a00-8be2-8bc55acb9585","Type":"ContainerStarted","Data":"e9e56cda34face48bd3f08f2bd8ab23d51558b3a539a47b1e0b7918a8e92c3f9"} Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.855748 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"8b9deffe-ab6a-46d4-a463-9ed81e6f3889","Type":"ContainerStarted","Data":"14901c3f8e72ccc71f109ff56bff6c5e5023736a45fc53e1659056efe8eb1769"} Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.871682 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ntwbd" event={"ID":"127a1d16-9779-4760-88eb-28d61312ef0f","Type":"ContainerStarted","Data":"e4ca0e7d14c14e775e85b5182e152078365c7122de6f3c07118858445966a0f3"} Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.872636 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ntwbd" Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.875440 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"01a7894c-709c-47a8-990f-b051e2199694","Type":"ContainerStarted","Data":"d70ef545a4c6266ee7806d1c8346b8654eeadcc02bcb9583f1310cec2f7101d4"} Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.913670 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-98vxk" podStartSLOduration=22.928457952 podStartE2EDuration="27.913648674s" podCreationTimestamp="2026-02-16 21:14:14 +0000 UTC" firstStartedPulling="2026-02-16 21:14:36.063864899 +0000 UTC m=+1093.882548194" lastFinishedPulling="2026-02-16 21:14:41.049055621 +0000 UTC m=+1098.867738916" observedRunningTime="2026-02-16 21:14:41.90805434 +0000 UTC m=+1099.726737635" watchObservedRunningTime="2026-02-16 21:14:41.913648674 +0000 UTC m=+1099.732331969" Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.962334 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.508426827 podStartE2EDuration="33.962316368s" podCreationTimestamp="2026-02-16 21:14:08 +0000 UTC" firstStartedPulling="2026-02-16 21:14:10.4062903 +0000 UTC m=+1068.224973595" lastFinishedPulling="2026-02-16 
21:14:34.860179841 +0000 UTC m=+1092.678863136" observedRunningTime="2026-02-16 21:14:41.957833524 +0000 UTC m=+1099.776516819" watchObservedRunningTime="2026-02-16 21:14:41.962316368 +0000 UTC m=+1099.780999653"
Feb 16 21:14:41 crc kubenswrapper[4805]: I0216 21:14:41.977249 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ntwbd" podStartSLOduration=20.686170427 podStartE2EDuration="25.977230269s" podCreationTimestamp="2026-02-16 21:14:16 +0000 UTC" firstStartedPulling="2026-02-16 21:14:36.03433776 +0000 UTC m=+1093.853021055" lastFinishedPulling="2026-02-16 21:14:41.325397602 +0000 UTC m=+1099.144080897" observedRunningTime="2026-02-16 21:14:41.9761252 +0000 UTC m=+1099.794808505" watchObservedRunningTime="2026-02-16 21:14:41.977230269 +0000 UTC m=+1099.795913564"
Feb 16 21:14:42 crc kubenswrapper[4805]: I0216 21:14:42.898420 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"26f1c84d-9566-4135-a24a-ce299c76a102","Type":"ContainerStarted","Data":"5b1940f3f28c39bd7fa3fa3cf379b36ecffd0039266209052818e109e3c14860"}
Feb 16 21:14:42 crc kubenswrapper[4805]: I0216 21:14:42.902801 4805 generic.go:334] "Generic (PLEG): container finished" podID="faacbcd6-a65d-46c0-9173-f96b12b74793" containerID="3bec6ffa0c78dff4dd39300a988310ea46ba58efc371fa7ba60414ebb1b84bf6" exitCode=0
Feb 16 21:14:42 crc kubenswrapper[4805]: I0216 21:14:42.902980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jtmkd" event={"ID":"faacbcd6-a65d-46c0-9173-f96b12b74793","Type":"ContainerDied","Data":"3bec6ffa0c78dff4dd39300a988310ea46ba58efc371fa7ba60414ebb1b84bf6"}
Feb 16 21:14:42 crc kubenswrapper[4805]: I0216 21:14:42.932930 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.419829965 podStartE2EDuration="33.932911378s" podCreationTimestamp="2026-02-16 21:14:09 +0000 UTC" firstStartedPulling="2026-02-16 21:14:12.103397664 +0000 UTC m=+1069.922080959" lastFinishedPulling="2026-02-16 21:14:35.616479077 +0000 UTC m=+1093.435162372" observedRunningTime="2026-02-16 21:14:42.92393561 +0000 UTC m=+1100.742618995" watchObservedRunningTime="2026-02-16 21:14:42.932911378 +0000 UTC m=+1100.751594673"
Feb 16 21:14:43 crc kubenswrapper[4805]: I0216 21:14:43.913370 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jtmkd" event={"ID":"faacbcd6-a65d-46c0-9173-f96b12b74793","Type":"ContainerStarted","Data":"f53e3fa78a3aa4a067bac8bf3d85d2748f937d9e920da3d1f5d4a9a111e1e702"}
Feb 16 21:14:43 crc kubenswrapper[4805]: I0216 21:14:43.913928 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jtmkd" event={"ID":"faacbcd6-a65d-46c0-9173-f96b12b74793","Type":"ContainerStarted","Data":"d1b146d4ba543076ee09f138262292a545724f33930cc9bfefc810d1d253fa1c"}
Feb 16 21:14:43 crc kubenswrapper[4805]: I0216 21:14:43.913945 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jtmkd"
Feb 16 21:14:43 crc kubenswrapper[4805]: I0216 21:14:43.913956 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jtmkd"
Feb 16 21:14:43 crc kubenswrapper[4805]: I0216 21:14:43.915622 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"081f6c0e-a934-4a00-8be2-8bc55acb9585","Type":"ContainerStarted","Data":"416a61b7ff51f39aedcae9926f432ff6719e07938976a5851c96de5215ceee55"}
Feb 16 21:14:43 crc kubenswrapper[4805]: I0216 21:14:43.917572 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"01a7894c-709c-47a8-990f-b051e2199694","Type":"ContainerStarted","Data":"d3a66afc879f60f2c7cf88748aa98e38b7b0c0b7a08cdd2c8c1736cbb8897ca4"}
Feb 16 21:14:43 crc kubenswrapper[4805]: I0216 21:14:43.944874 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jtmkd" podStartSLOduration=23.723725994 podStartE2EDuration="27.94484687s" podCreationTimestamp="2026-02-16 21:14:16 +0000 UTC" firstStartedPulling="2026-02-16 21:14:37.165529737 +0000 UTC m=+1094.984213032" lastFinishedPulling="2026-02-16 21:14:41.386650603 +0000 UTC m=+1099.205333908" observedRunningTime="2026-02-16 21:14:43.935254565 +0000 UTC m=+1101.753937880" watchObservedRunningTime="2026-02-16 21:14:43.94484687 +0000 UTC m=+1101.763530185"
Feb 16 21:14:43 crc kubenswrapper[4805]: I0216 21:14:43.964142 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=18.354049361 podStartE2EDuration="24.964124122s" podCreationTimestamp="2026-02-16 21:14:19 +0000 UTC" firstStartedPulling="2026-02-16 21:14:36.298934637 +0000 UTC m=+1094.117617922" lastFinishedPulling="2026-02-16 21:14:42.909009388 +0000 UTC m=+1100.727692683" observedRunningTime="2026-02-16 21:14:43.95753002 +0000 UTC m=+1101.776213315" watchObservedRunningTime="2026-02-16 21:14:43.964124122 +0000 UTC m=+1101.782807417"
Feb 16 21:14:43 crc kubenswrapper[4805]: I0216 21:14:43.982150 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=24.446948044 podStartE2EDuration="28.982131319s" podCreationTimestamp="2026-02-16 21:14:15 +0000 UTC" firstStartedPulling="2026-02-16 21:14:38.365438562 +0000 UTC m=+1096.184121847" lastFinishedPulling="2026-02-16 21:14:42.900621827 +0000 UTC m=+1100.719305122" observedRunningTime="2026-02-16 21:14:43.976776251 +0000 UTC m=+1101.795459556" watchObservedRunningTime="2026-02-16 21:14:43.982131319 +0000 UTC m=+1101.800814614"
Feb 16 21:14:44 crc kubenswrapper[4805]: I0216 21:14:44.342564 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:14:44 crc kubenswrapper[4805]: I0216 21:14:44.415668 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:14:44 crc kubenswrapper[4805]: I0216 21:14:44.834304 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:14:44 crc kubenswrapper[4805]: I0216 21:14:44.893643 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:14:44 crc kubenswrapper[4805]: I0216 21:14:44.928119 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0","Type":"ContainerStarted","Data":"ad495408810070426871cbc7524fae8858fbd209987c836aa20bd539abee8f91"}
Feb 16 21:14:44 crc kubenswrapper[4805]: I0216 21:14:44.928851 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:14:44 crc kubenswrapper[4805]: I0216 21:14:44.929006 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:14:45 crc kubenswrapper[4805]: I0216 21:14:45.320020 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:45 crc kubenswrapper[4805]: I0216 21:14:45.320096 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:45 crc kubenswrapper[4805]: I0216 21:14:45.327305 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:45 crc kubenswrapper[4805]: I0216 21:14:45.949296 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7ffb44489c-m6dch"
Feb 16 21:14:46 crc kubenswrapper[4805]: I0216 21:14:46.047919 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7cb64748c-ggr6n"]
Feb 16 21:14:46 crc kubenswrapper[4805]: I0216 21:14:46.311246 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.397069 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.708532 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9j79x"]
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.791976 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ljkcf"]
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.795758 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.801462 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.818078 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hch2z"]
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.823047 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.826478 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.833042 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ljkcf"]
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.853698 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hch2z"]
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.878489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.878638 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp4bc\" (UniqueName: \"kubernetes.io/projected/bd3976a1-6498-480d-a4d9-ebec8797c16d-kube-api-access-sp4bc\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.878675 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.878776 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-config\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.974118 4805 generic.go:334] "Generic (PLEG): container finished" podID="2a328213-f410-43cc-8dd7-51a427a4d7c3" containerID="c12456b60a1dcadf7e89ea46eb85cfefbc2402897d5ce72474e6f7f3b2df7d2f" exitCode=0
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.974503 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9j79x" event={"ID":"2a328213-f410-43cc-8dd7-51a427a4d7c3","Type":"ContainerDied","Data":"c12456b60a1dcadf7e89ea46eb85cfefbc2402897d5ce72474e6f7f3b2df7d2f"}
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.977065 4805 generic.go:334] "Generic (PLEG): container finished" podID="e1b6981b-31c5-4bde-a3a3-b76721a723a7" containerID="221ee553ba96d79173d3faa5225b0ad633fa41a270e6ad99afd9505dba375463" exitCode=0
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.977093 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" event={"ID":"e1b6981b-31c5-4bde-a3a3-b76721a723a7","Type":"ContainerDied","Data":"221ee553ba96d79173d3faa5225b0ad633fa41a270e6ad99afd9505dba375463"}
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.979601 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d56588-d2f3-4207-8338-c39de08d752b-combined-ca-bundle\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.979636 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/45d56588-d2f3-4207-8338-c39de08d752b-ovs-rundir\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.979665 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.979696 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/45d56588-d2f3-4207-8338-c39de08d752b-ovn-rundir\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.979747 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp4bc\" (UniqueName: \"kubernetes.io/projected/bd3976a1-6498-480d-a4d9-ebec8797c16d-kube-api-access-sp4bc\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.979764 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d56588-d2f3-4207-8338-c39de08d752b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.979783 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.979834 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d56588-d2f3-4207-8338-c39de08d752b-config\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.979851 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxdgh\" (UniqueName: \"kubernetes.io/projected/45d56588-d2f3-4207-8338-c39de08d752b-kube-api-access-lxdgh\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.979875 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-config\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.980615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-config\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.981161 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.981883 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:47 crc kubenswrapper[4805]: I0216 21:14:47.995714 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kxcsp"]
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.000066 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp4bc\" (UniqueName: \"kubernetes.io/projected/bd3976a1-6498-480d-a4d9-ebec8797c16d-kube-api-access-sp4bc\") pod \"dnsmasq-dns-5bf47b49b7-ljkcf\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.064688 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-kfgdf"]
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.066555 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.069400 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.081408 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/45d56588-d2f3-4207-8338-c39de08d752b-ovs-rundir\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.081492 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/45d56588-d2f3-4207-8338-c39de08d752b-ovn-rundir\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.081548 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d56588-d2f3-4207-8338-c39de08d752b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.081611 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d56588-d2f3-4207-8338-c39de08d752b-config\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.081629 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxdgh\" (UniqueName: \"kubernetes.io/projected/45d56588-d2f3-4207-8338-c39de08d752b-kube-api-access-lxdgh\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.081710 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d56588-d2f3-4207-8338-c39de08d752b-combined-ca-bundle\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.081735 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/45d56588-d2f3-4207-8338-c39de08d752b-ovs-rundir\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.082162 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/45d56588-d2f3-4207-8338-c39de08d752b-ovn-rundir\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.082910 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d56588-d2f3-4207-8338-c39de08d752b-config\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.087034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d56588-d2f3-4207-8338-c39de08d752b-combined-ca-bundle\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.088580 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d56588-d2f3-4207-8338-c39de08d752b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.109742 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxdgh\" (UniqueName: \"kubernetes.io/projected/45d56588-d2f3-4207-8338-c39de08d752b-kube-api-access-lxdgh\") pod \"ovn-controller-metrics-hch2z\" (UID: \"45d56588-d2f3-4207-8338-c39de08d752b\") " pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.116325 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.144971 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kfgdf"]
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.145773 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hch2z"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.188293 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-config\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.188393 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.188460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-dns-svc\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.188526 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj8jt\" (UniqueName: \"kubernetes.io/projected/5eab1109-6fc8-446e-b797-fc5e11e18f5e-kube-api-access-nj8jt\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.188612 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.290115 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-config\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.290215 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.290283 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-dns-svc\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.290351 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj8jt\" (UniqueName: \"kubernetes.io/projected/5eab1109-6fc8-446e-b797-fc5e11e18f5e-kube-api-access-nj8jt\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.290431 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.291849 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.294181 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.295302 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-config\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.298959 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-dns-svc\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.308344 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj8jt\" (UniqueName: \"kubernetes.io/projected/5eab1109-6fc8-446e-b797-fc5e11e18f5e-kube-api-access-nj8jt\") pod \"dnsmasq-dns-8554648995-kfgdf\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.452094 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9j79x"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.587055 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kfgdf"
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.595819 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-config\") pod \"2a328213-f410-43cc-8dd7-51a427a4d7c3\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") "
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.596015 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-dns-svc\") pod \"2a328213-f410-43cc-8dd7-51a427a4d7c3\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") "
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.596096 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j25zs\" (UniqueName: \"kubernetes.io/projected/2a328213-f410-43cc-8dd7-51a427a4d7c3-kube-api-access-j25zs\") pod \"2a328213-f410-43cc-8dd7-51a427a4d7c3\" (UID: \"2a328213-f410-43cc-8dd7-51a427a4d7c3\") "
Feb 16 21:14:48 crc kubenswrapper[4805]: I0216 21:14:48.602915 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a328213-f410-43cc-8dd7-51a427a4d7c3-kube-api-access-j25zs" (OuterVolumeSpecName: "kube-api-access-j25zs") pod "2a328213-f410-43cc-8dd7-51a427a4d7c3" (UID: "2a328213-f410-43cc-8dd7-51a427a4d7c3"). InnerVolumeSpecName "kube-api-access-j25zs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.650113 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-config" (OuterVolumeSpecName: "config") pod "2a328213-f410-43cc-8dd7-51a427a4d7c3" (UID: "2a328213-f410-43cc-8dd7-51a427a4d7c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.682890 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a328213-f410-43cc-8dd7-51a427a4d7c3" (UID: "2a328213-f410-43cc-8dd7-51a427a4d7c3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.700858 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j25zs\" (UniqueName: \"kubernetes.io/projected/2a328213-f410-43cc-8dd7-51a427a4d7c3-kube-api-access-j25zs\") on node \"crc\" DevicePath \"\""
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.700889 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-config\") on node \"crc\" DevicePath \"\""
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.700897 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a328213-f410-43cc-8dd7-51a427a4d7c3-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.741712 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hch2z"]
Feb 16 21:14:49 crc kubenswrapper[4805]: W0216 21:14:48.752338 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45d56588_d2f3_4207_8338_c39de08d752b.slice/crio-a1458f61994494b9de40ec3bd6553e65e6b5c9d889eb62c03f2db8ceb811ab6c WatchSource:0}: Error finding container a1458f61994494b9de40ec3bd6553e65e6b5c9d889eb62c03f2db8ceb811ab6c: Status 404 returned error can't find the container with id a1458f61994494b9de40ec3bd6553e65e6b5c9d889eb62c03f2db8ceb811ab6c
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.836107 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ljkcf"]
Feb 16 21:14:49 crc kubenswrapper[4805]: W0216 21:14:48.836870 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd3976a1_6498_480d_a4d9_ebec8797c16d.slice/crio-40370c17ecc39239ad7c9f6ae3b03cde59dc4247663c3f3281ae5b786cd23f9c WatchSource:0}: Error finding container 40370c17ecc39239ad7c9f6ae3b03cde59dc4247663c3f3281ae5b786cd23f9c: Status 404 returned error can't find the container with id 40370c17ecc39239ad7c9f6ae3b03cde59dc4247663c3f3281ae5b786cd23f9c
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.993705 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf" event={"ID":"bd3976a1-6498-480d-a4d9-ebec8797c16d","Type":"ContainerStarted","Data":"40370c17ecc39239ad7c9f6ae3b03cde59dc4247663c3f3281ae5b786cd23f9c"}
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.996287 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9j79x" event={"ID":"2a328213-f410-43cc-8dd7-51a427a4d7c3","Type":"ContainerDied","Data":"c6aa92b9f441f4fc7fcf9162f38974a96d0a8ff9006644bcd7947494988cd2c8"}
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.996318 4805 scope.go:117] "RemoveContainer" containerID="c12456b60a1dcadf7e89ea46eb85cfefbc2402897d5ce72474e6f7f3b2df7d2f"
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.996329 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9j79x"
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:48.998323 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hch2z" event={"ID":"45d56588-d2f3-4207-8338-c39de08d752b","Type":"ContainerStarted","Data":"a1458f61994494b9de40ec3bd6553e65e6b5c9d889eb62c03f2db8ceb811ab6c"}
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.000917 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" event={"ID":"e1b6981b-31c5-4bde-a3a3-b76721a723a7","Type":"ContainerStarted","Data":"5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a"}
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.001118 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" podUID="e1b6981b-31c5-4bde-a3a3-b76721a723a7" containerName="dnsmasq-dns" containerID="cri-o://5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a" gracePeriod=10
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.001294 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp"
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.036372 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" podStartSLOduration=3.050920319 podStartE2EDuration="42.036354257s" podCreationTimestamp="2026-02-16 21:14:07 +0000 UTC" firstStartedPulling="2026-02-16 21:14:08.027486018 +0000 UTC m=+1065.846169313" lastFinishedPulling="2026-02-16 21:14:47.012919946 +0000 UTC m=+1104.831603251" observedRunningTime="2026-02-16 21:14:49.032092909 +0000 UTC m=+1106.850776224" watchObservedRunningTime="2026-02-16 21:14:49.036354257 +0000 UTC m=+1106.855037552"
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.082618 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9j79x"]
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.093823 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9j79x"]
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.613062 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a328213-f410-43cc-8dd7-51a427a4d7c3" path="/var/lib/kubelet/pods/2a328213-f410-43cc-8dd7-51a427a4d7c3/volumes"
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.637620 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp"
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.696045 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kfgdf"]
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.747088 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-config\") pod \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") "
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.747215 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-dns-svc\") pod \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") "
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.747277 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzdnb\" (UniqueName: \"kubernetes.io/projected/e1b6981b-31c5-4bde-a3a3-b76721a723a7-kube-api-access-fzdnb\") pod \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\" (UID: \"e1b6981b-31c5-4bde-a3a3-b76721a723a7\") "
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.752333 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b6981b-31c5-4bde-a3a3-b76721a723a7-kube-api-access-fzdnb" (OuterVolumeSpecName: "kube-api-access-fzdnb") pod "e1b6981b-31c5-4bde-a3a3-b76721a723a7" (UID: "e1b6981b-31c5-4bde-a3a3-b76721a723a7"). InnerVolumeSpecName "kube-api-access-fzdnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.795377 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-config" (OuterVolumeSpecName: "config") pod "e1b6981b-31c5-4bde-a3a3-b76721a723a7" (UID: "e1b6981b-31c5-4bde-a3a3-b76721a723a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.799766 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1b6981b-31c5-4bde-a3a3-b76721a723a7" (UID: "e1b6981b-31c5-4bde-a3a3-b76721a723a7"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.819030 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.819078 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.849836 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzdnb\" (UniqueName: \"kubernetes.io/projected/e1b6981b-31c5-4bde-a3a3-b76721a723a7-kube-api-access-fzdnb\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.849864 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:49 crc kubenswrapper[4805]: I0216 21:14:49.849873 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1b6981b-31c5-4bde-a3a3-b76721a723a7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.036519 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kfgdf" event={"ID":"5eab1109-6fc8-446e-b797-fc5e11e18f5e","Type":"ContainerStarted","Data":"d9917d7bd2194f829e23c64cc0b8c56918b60973eb15604517406ef67f3e83e7"} Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.036815 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kfgdf" event={"ID":"5eab1109-6fc8-446e-b797-fc5e11e18f5e","Type":"ContainerStarted","Data":"cc25d67a75fd325e7fad9fc86d1ada38d1f36600d23ff54b6ede22f6d44f6dca"} Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.038516 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hch2z" 
event={"ID":"45d56588-d2f3-4207-8338-c39de08d752b","Type":"ContainerStarted","Data":"2e569fd9e7e394bbb7d4eb36585fb552e6d1606f74fef78d80cd94e9300339cd"} Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.044202 4805 generic.go:334] "Generic (PLEG): container finished" podID="e1b6981b-31c5-4bde-a3a3-b76721a723a7" containerID="5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a" exitCode=0 Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.044281 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" event={"ID":"e1b6981b-31c5-4bde-a3a3-b76721a723a7","Type":"ContainerDied","Data":"5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a"} Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.044305 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" event={"ID":"e1b6981b-31c5-4bde-a3a3-b76721a723a7","Type":"ContainerDied","Data":"94fa57a7f34c95b0537236630cf7d260f7589961b0d7dacea7a50aedcc079993"} Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.044320 4805 scope.go:117] "RemoveContainer" containerID="5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.044432 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kxcsp" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.054930 4805 generic.go:334] "Generic (PLEG): container finished" podID="bd3976a1-6498-480d-a4d9-ebec8797c16d" containerID="563d9ca381cb940c006191940e1ae5f776a85cab6812153950161babef77100c" exitCode=0 Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.054971 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf" event={"ID":"bd3976a1-6498-480d-a4d9-ebec8797c16d","Type":"ContainerDied","Data":"563d9ca381cb940c006191940e1ae5f776a85cab6812153950161babef77100c"} Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.140550 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hch2z" podStartSLOduration=3.140515385 podStartE2EDuration="3.140515385s" podCreationTimestamp="2026-02-16 21:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:50.137116971 +0000 UTC m=+1107.955800266" watchObservedRunningTime="2026-02-16 21:14:50.140515385 +0000 UTC m=+1107.959198680" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.180614 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.310060 4805 scope.go:117] "RemoveContainer" containerID="221ee553ba96d79173d3faa5225b0ad633fa41a270e6ad99afd9505dba375463" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.319872 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.363832 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kxcsp"] Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.366329 4805 scope.go:117] "RemoveContainer" 
containerID="5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a" Feb 16 21:14:50 crc kubenswrapper[4805]: E0216 21:14:50.366697 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a\": container with ID starting with 5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a not found: ID does not exist" containerID="5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.366775 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a"} err="failed to get container status \"5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a\": rpc error: code = NotFound desc = could not find container \"5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a\": container with ID starting with 5b6bd48ca17b413e329da6673217ab40e919afcf273bbd7f1fe91e2fa8eaf47a not found: ID does not exist" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.366802 4805 scope.go:117] "RemoveContainer" containerID="221ee553ba96d79173d3faa5225b0ad633fa41a270e6ad99afd9505dba375463" Feb 16 21:14:50 crc kubenswrapper[4805]: E0216 21:14:50.369397 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"221ee553ba96d79173d3faa5225b0ad633fa41a270e6ad99afd9505dba375463\": container with ID starting with 221ee553ba96d79173d3faa5225b0ad633fa41a270e6ad99afd9505dba375463 not found: ID does not exist" containerID="221ee553ba96d79173d3faa5225b0ad633fa41a270e6ad99afd9505dba375463" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.369445 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"221ee553ba96d79173d3faa5225b0ad633fa41a270e6ad99afd9505dba375463"} err="failed to get container status \"221ee553ba96d79173d3faa5225b0ad633fa41a270e6ad99afd9505dba375463\": rpc error: code = NotFound desc = could not find container \"221ee553ba96d79173d3faa5225b0ad633fa41a270e6ad99afd9505dba375463\": container with ID starting with 221ee553ba96d79173d3faa5225b0ad633fa41a270e6ad99afd9505dba375463 not found: ID does not exist" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.375143 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kxcsp"] Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.884334 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.990645 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:50 crc kubenswrapper[4805]: I0216 21:14:50.990679 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.062234 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 16 21:14:51 crc kubenswrapper[4805]: E0216 21:14:51.063130 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b6981b-31c5-4bde-a3a3-b76721a723a7" containerName="dnsmasq-dns" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.065804 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b6981b-31c5-4bde-a3a3-b76721a723a7" containerName="dnsmasq-dns" Feb 16 21:14:51 crc kubenswrapper[4805]: E0216 21:14:51.066016 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a328213-f410-43cc-8dd7-51a427a4d7c3" containerName="init" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.066076 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2a328213-f410-43cc-8dd7-51a427a4d7c3" containerName="init" Feb 16 21:14:51 crc kubenswrapper[4805]: E0216 21:14:51.066143 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b6981b-31c5-4bde-a3a3-b76721a723a7" containerName="init" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.066219 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b6981b-31c5-4bde-a3a3-b76721a723a7" containerName="init" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.066545 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b6981b-31c5-4bde-a3a3-b76721a723a7" containerName="dnsmasq-dns" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.066616 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a328213-f410-43cc-8dd7-51a427a4d7c3" containerName="init" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.067712 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.073362 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7btz5" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.075071 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf" event={"ID":"bd3976a1-6498-480d-a4d9-ebec8797c16d","Type":"ContainerStarted","Data":"ce30475da0fb603766a1ce6568786a4eb5c0ffaf32798ee346cfbd77b873a995"} Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.075214 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.076984 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.078268 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"ovnnorthd-config" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.082867 4805 generic.go:334] "Generic (PLEG): container finished" podID="5eab1109-6fc8-446e-b797-fc5e11e18f5e" containerID="d9917d7bd2194f829e23c64cc0b8c56918b60973eb15604517406ef67f3e83e7" exitCode=0 Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.082908 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kfgdf" event={"ID":"5eab1109-6fc8-446e-b797-fc5e11e18f5e","Type":"ContainerDied","Data":"d9917d7bd2194f829e23c64cc0b8c56918b60973eb15604517406ef67f3e83e7"} Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.082887 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.086711 4805 generic.go:334] "Generic (PLEG): container finished" podID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerID="ad495408810070426871cbc7524fae8858fbd209987c836aa20bd539abee8f91" exitCode=0 Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.086772 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0","Type":"ContainerDied","Data":"ad495408810070426871cbc7524fae8858fbd209987c836aa20bd539abee8f91"} Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.088073 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.190312 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf" podStartSLOduration=4.190287391 podStartE2EDuration="4.190287391s" podCreationTimestamp="2026-02-16 21:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:51.181596041 +0000 UTC m=+1109.000279356" 
watchObservedRunningTime="2026-02-16 21:14:51.190287391 +0000 UTC m=+1109.008970686" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.269576 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.284755 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/532c871c-9fef-4023-a49c-ef44566659ff-scripts\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.285000 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/532c871c-9fef-4023-a49c-ef44566659ff-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.285108 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532c871c-9fef-4023-a49c-ef44566659ff-config\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.285214 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjckq\" (UniqueName: \"kubernetes.io/projected/532c871c-9fef-4023-a49c-ef44566659ff-kube-api-access-jjckq\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.285359 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/532c871c-9fef-4023-a49c-ef44566659ff-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.285465 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/532c871c-9fef-4023-a49c-ef44566659ff-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.285552 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/532c871c-9fef-4023-a49c-ef44566659ff-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.387116 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532c871c-9fef-4023-a49c-ef44566659ff-config\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.387230 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjckq\" (UniqueName: \"kubernetes.io/projected/532c871c-9fef-4023-a49c-ef44566659ff-kube-api-access-jjckq\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.387277 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532c871c-9fef-4023-a49c-ef44566659ff-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " 
pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.387356 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/532c871c-9fef-4023-a49c-ef44566659ff-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.387383 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/532c871c-9fef-4023-a49c-ef44566659ff-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.387479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/532c871c-9fef-4023-a49c-ef44566659ff-scripts\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.387528 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/532c871c-9fef-4023-a49c-ef44566659ff-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.390292 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/532c871c-9fef-4023-a49c-ef44566659ff-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.390313 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/532c871c-9fef-4023-a49c-ef44566659ff-config\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.390884 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/532c871c-9fef-4023-a49c-ef44566659ff-scripts\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.393473 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532c871c-9fef-4023-a49c-ef44566659ff-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.393703 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/532c871c-9fef-4023-a49c-ef44566659ff-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.393984 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/532c871c-9fef-4023-a49c-ef44566659ff-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.409924 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjckq\" (UniqueName: \"kubernetes.io/projected/532c871c-9fef-4023-a49c-ef44566659ff-kube-api-access-jjckq\") pod \"ovn-northd-0\" (UID: \"532c871c-9fef-4023-a49c-ef44566659ff\") " pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 
21:14:51.431642 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.546600 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 21:14:51 crc kubenswrapper[4805]: I0216 21:14:51.612211 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b6981b-31c5-4bde-a3a3-b76721a723a7" path="/var/lib/kubelet/pods/e1b6981b-31c5-4bde-a3a3-b76721a723a7/volumes" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.039679 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.098515 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kfgdf" event={"ID":"5eab1109-6fc8-446e-b797-fc5e11e18f5e","Type":"ContainerStarted","Data":"4a68fee97960fcc52b825c1dd5e18ca82895920228e7a5d63c663af0cd45cfcf"} Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.100131 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"532c871c-9fef-4023-a49c-ef44566659ff","Type":"ContainerStarted","Data":"e6c661f22511b3e209d7ea7883c43383b4b8af9965a51181640b69a9e1d867a6"} Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.552275 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db51-account-create-update-bxgf7"] Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.554984 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db51-account-create-update-bxgf7" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.558949 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.561207 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db51-account-create-update-bxgf7"] Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.615489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7gjk\" (UniqueName: \"kubernetes.io/projected/61f67549-f167-4252-9aa0-d19ab787ab6b-kube-api-access-f7gjk\") pod \"keystone-db51-account-create-update-bxgf7\" (UID: \"61f67549-f167-4252-9aa0-d19ab787ab6b\") " pod="openstack/keystone-db51-account-create-update-bxgf7" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.615579 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61f67549-f167-4252-9aa0-d19ab787ab6b-operator-scripts\") pod \"keystone-db51-account-create-update-bxgf7\" (UID: \"61f67549-f167-4252-9aa0-d19ab787ab6b\") " pod="openstack/keystone-db51-account-create-update-bxgf7" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.622124 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6jbxd"] Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.624614 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6jbxd" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.632112 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6jbxd"] Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.720311 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqkrd\" (UniqueName: \"kubernetes.io/projected/18f63756-edb7-48fb-a2b0-0c911a9f7520-kube-api-access-tqkrd\") pod \"keystone-db-create-6jbxd\" (UID: \"18f63756-edb7-48fb-a2b0-0c911a9f7520\") " pod="openstack/keystone-db-create-6jbxd" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.720428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7gjk\" (UniqueName: \"kubernetes.io/projected/61f67549-f167-4252-9aa0-d19ab787ab6b-kube-api-access-f7gjk\") pod \"keystone-db51-account-create-update-bxgf7\" (UID: \"61f67549-f167-4252-9aa0-d19ab787ab6b\") " pod="openstack/keystone-db51-account-create-update-bxgf7" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.720456 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61f67549-f167-4252-9aa0-d19ab787ab6b-operator-scripts\") pod \"keystone-db51-account-create-update-bxgf7\" (UID: \"61f67549-f167-4252-9aa0-d19ab787ab6b\") " pod="openstack/keystone-db51-account-create-update-bxgf7" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.720487 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f63756-edb7-48fb-a2b0-0c911a9f7520-operator-scripts\") pod \"keystone-db-create-6jbxd\" (UID: \"18f63756-edb7-48fb-a2b0-0c911a9f7520\") " pod="openstack/keystone-db-create-6jbxd" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.721476 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61f67549-f167-4252-9aa0-d19ab787ab6b-operator-scripts\") pod \"keystone-db51-account-create-update-bxgf7\" (UID: \"61f67549-f167-4252-9aa0-d19ab787ab6b\") " pod="openstack/keystone-db51-account-create-update-bxgf7" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.722915 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9sptw"] Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.724142 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9sptw" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.733754 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9sptw"] Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.744312 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7gjk\" (UniqueName: \"kubernetes.io/projected/61f67549-f167-4252-9aa0-d19ab787ab6b-kube-api-access-f7gjk\") pod \"keystone-db51-account-create-update-bxgf7\" (UID: \"61f67549-f167-4252-9aa0-d19ab787ab6b\") " pod="openstack/keystone-db51-account-create-update-bxgf7" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.753238 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-104e-account-create-update-g4bmr"] Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.754462 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-104e-account-create-update-g4bmr" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.768337 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.791589 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-104e-account-create-update-g4bmr"] Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.827008 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a79f73-956b-4a3f-896a-ec53b38e84f4-operator-scripts\") pod \"placement-104e-account-create-update-g4bmr\" (UID: \"29a79f73-956b-4a3f-896a-ec53b38e84f4\") " pod="openstack/placement-104e-account-create-update-g4bmr" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.827059 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4bqd\" (UniqueName: \"kubernetes.io/projected/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-kube-api-access-q4bqd\") pod \"placement-db-create-9sptw\" (UID: \"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f\") " pod="openstack/placement-db-create-9sptw" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.827117 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f63756-edb7-48fb-a2b0-0c911a9f7520-operator-scripts\") pod \"keystone-db-create-6jbxd\" (UID: \"18f63756-edb7-48fb-a2b0-0c911a9f7520\") " pod="openstack/keystone-db-create-6jbxd" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.827157 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-operator-scripts\") pod \"placement-db-create-9sptw\" (UID: 
\"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f\") " pod="openstack/placement-db-create-9sptw" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.827185 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzls4\" (UniqueName: \"kubernetes.io/projected/29a79f73-956b-4a3f-896a-ec53b38e84f4-kube-api-access-tzls4\") pod \"placement-104e-account-create-update-g4bmr\" (UID: \"29a79f73-956b-4a3f-896a-ec53b38e84f4\") " pod="openstack/placement-104e-account-create-update-g4bmr" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.827233 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqkrd\" (UniqueName: \"kubernetes.io/projected/18f63756-edb7-48fb-a2b0-0c911a9f7520-kube-api-access-tqkrd\") pod \"keystone-db-create-6jbxd\" (UID: \"18f63756-edb7-48fb-a2b0-0c911a9f7520\") " pod="openstack/keystone-db-create-6jbxd" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.828180 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f63756-edb7-48fb-a2b0-0c911a9f7520-operator-scripts\") pod \"keystone-db-create-6jbxd\" (UID: \"18f63756-edb7-48fb-a2b0-0c911a9f7520\") " pod="openstack/keystone-db-create-6jbxd" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.850392 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqkrd\" (UniqueName: \"kubernetes.io/projected/18f63756-edb7-48fb-a2b0-0c911a9f7520-kube-api-access-tqkrd\") pod \"keystone-db-create-6jbxd\" (UID: \"18f63756-edb7-48fb-a2b0-0c911a9f7520\") " pod="openstack/keystone-db-create-6jbxd" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.872156 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db51-account-create-update-bxgf7" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.932074 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a79f73-956b-4a3f-896a-ec53b38e84f4-operator-scripts\") pod \"placement-104e-account-create-update-g4bmr\" (UID: \"29a79f73-956b-4a3f-896a-ec53b38e84f4\") " pod="openstack/placement-104e-account-create-update-g4bmr" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.932121 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4bqd\" (UniqueName: \"kubernetes.io/projected/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-kube-api-access-q4bqd\") pod \"placement-db-create-9sptw\" (UID: \"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f\") " pod="openstack/placement-db-create-9sptw" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.932233 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-operator-scripts\") pod \"placement-db-create-9sptw\" (UID: \"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f\") " pod="openstack/placement-db-create-9sptw" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.932265 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzls4\" (UniqueName: \"kubernetes.io/projected/29a79f73-956b-4a3f-896a-ec53b38e84f4-kube-api-access-tzls4\") pod \"placement-104e-account-create-update-g4bmr\" (UID: \"29a79f73-956b-4a3f-896a-ec53b38e84f4\") " pod="openstack/placement-104e-account-create-update-g4bmr" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.932976 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a79f73-956b-4a3f-896a-ec53b38e84f4-operator-scripts\") pod 
\"placement-104e-account-create-update-g4bmr\" (UID: \"29a79f73-956b-4a3f-896a-ec53b38e84f4\") " pod="openstack/placement-104e-account-create-update-g4bmr" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.932994 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-operator-scripts\") pod \"placement-db-create-9sptw\" (UID: \"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f\") " pod="openstack/placement-db-create-9sptw" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.943242 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6jbxd" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.961972 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4bqd\" (UniqueName: \"kubernetes.io/projected/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-kube-api-access-q4bqd\") pod \"placement-db-create-9sptw\" (UID: \"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f\") " pod="openstack/placement-db-create-9sptw" Feb 16 21:14:52 crc kubenswrapper[4805]: I0216 21:14:52.962805 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzls4\" (UniqueName: \"kubernetes.io/projected/29a79f73-956b-4a3f-896a-ec53b38e84f4-kube-api-access-tzls4\") pod \"placement-104e-account-create-update-g4bmr\" (UID: \"29a79f73-956b-4a3f-896a-ec53b38e84f4\") " pod="openstack/placement-104e-account-create-update-g4bmr" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.042934 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9sptw" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.111250 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-kfgdf" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.127309 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-104e-account-create-update-g4bmr" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.141114 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-kfgdf" podStartSLOduration=5.141095407 podStartE2EDuration="5.141095407s" podCreationTimestamp="2026-02-16 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:53.130880514 +0000 UTC m=+1110.949563809" watchObservedRunningTime="2026-02-16 21:14:53.141095407 +0000 UTC m=+1110.959778702" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.487699 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6jbxd"] Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.495468 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db51-account-create-update-bxgf7"] Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.700027 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vms5f"] Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.735711 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vms5f" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.763206 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vms5f\" (UID: \"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce\") " pod="openstack/mysqld-exporter-openstack-db-create-vms5f" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.763451 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krkch\" (UniqueName: \"kubernetes.io/projected/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-kube-api-access-krkch\") pod \"mysqld-exporter-openstack-db-create-vms5f\" (UID: \"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce\") " pod="openstack/mysqld-exporter-openstack-db-create-vms5f" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.820713 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vms5f"] Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.851385 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9sptw"] Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.865548 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vms5f\" (UID: \"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce\") " pod="openstack/mysqld-exporter-openstack-db-create-vms5f" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.865638 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krkch\" (UniqueName: \"kubernetes.io/projected/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-kube-api-access-krkch\") 
pod \"mysqld-exporter-openstack-db-create-vms5f\" (UID: \"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce\") " pod="openstack/mysqld-exporter-openstack-db-create-vms5f" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.866266 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vms5f\" (UID: \"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce\") " pod="openstack/mysqld-exporter-openstack-db-create-vms5f" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.890181 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-104e-account-create-update-g4bmr"] Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.907502 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.916711 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krkch\" (UniqueName: \"kubernetes.io/projected/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-kube-api-access-krkch\") pod \"mysqld-exporter-openstack-db-create-vms5f\" (UID: \"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce\") " pod="openstack/mysqld-exporter-openstack-db-create-vms5f" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.941871 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ljkcf"] Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.942109 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf" podUID="bd3976a1-6498-480d-a4d9-ebec8797c16d" containerName="dnsmasq-dns" containerID="cri-o://ce30475da0fb603766a1ce6568786a4eb5c0ffaf32798ee346cfbd77b873a995" gracePeriod=10 Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.981788 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-b8fbc5445-lfl97"] Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.983675 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:53 crc kubenswrapper[4805]: I0216 21:14:53.992910 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lfl97"] Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.075985 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x5dg\" (UniqueName: \"kubernetes.io/projected/69ea890f-a85e-40d2-8722-71bcd489b1ec-kube-api-access-2x5dg\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.076046 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.076078 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.076285 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.076575 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-config\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.134353 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-7e8a-account-create-update-dzqhr"] Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.136213 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.139343 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.152842 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vms5f" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.169783 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-7e8a-account-create-update-dzqhr"] Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.178131 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qvnq\" (UniqueName: \"kubernetes.io/projected/6eb601c9-1da6-47be-b108-beb6a9cfbd03-kube-api-access-5qvnq\") pod \"mysqld-exporter-7e8a-account-create-update-dzqhr\" (UID: \"6eb601c9-1da6-47be-b108-beb6a9cfbd03\") " pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.178338 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.178455 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eb601c9-1da6-47be-b108-beb6a9cfbd03-operator-scripts\") pod \"mysqld-exporter-7e8a-account-create-update-dzqhr\" (UID: \"6eb601c9-1da6-47be-b108-beb6a9cfbd03\") " pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.178558 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-config\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 
21:14:54.178669 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x5dg\" (UniqueName: \"kubernetes.io/projected/69ea890f-a85e-40d2-8722-71bcd489b1ec-kube-api-access-2x5dg\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.178791 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.178860 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.179990 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.180335 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.181741 4805 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-config\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.181808 4805 generic.go:334] "Generic (PLEG): container finished" podID="bd3976a1-6498-480d-a4d9-ebec8797c16d" containerID="ce30475da0fb603766a1ce6568786a4eb5c0ffaf32798ee346cfbd77b873a995" exitCode=0 Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.181868 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf" event={"ID":"bd3976a1-6498-480d-a4d9-ebec8797c16d","Type":"ContainerDied","Data":"ce30475da0fb603766a1ce6568786a4eb5c0ffaf32798ee346cfbd77b873a995"} Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.185188 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.196052 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9sptw" event={"ID":"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f","Type":"ContainerStarted","Data":"61991e923cc7bcb734638657eb0e6c2f775790edf95b4a6dc2bfa42fc7d25c5d"} Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.211237 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x5dg\" (UniqueName: \"kubernetes.io/projected/69ea890f-a85e-40d2-8722-71bcd489b1ec-kube-api-access-2x5dg\") pod \"dnsmasq-dns-b8fbc5445-lfl97\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.238575 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-6jbxd" event={"ID":"18f63756-edb7-48fb-a2b0-0c911a9f7520","Type":"ContainerStarted","Data":"2a63eeb39c588e0d516a7040aaa1b21936a5a8d6f120a1da6dad16012edb275b"} Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.241300 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-104e-account-create-update-g4bmr" event={"ID":"29a79f73-956b-4a3f-896a-ec53b38e84f4","Type":"ContainerStarted","Data":"f552030ae07544de2be793ed51a65f7a60fa4034f85c25ba3e727ee404107b0e"} Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.247533 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db51-account-create-update-bxgf7" event={"ID":"61f67549-f167-4252-9aa0-d19ab787ab6b","Type":"ContainerStarted","Data":"9c998371e5d8f34dbbe247dc43ab3d8b1133e1c6869486610198cd37063f86d5"} Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.283367 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qvnq\" (UniqueName: \"kubernetes.io/projected/6eb601c9-1da6-47be-b108-beb6a9cfbd03-kube-api-access-5qvnq\") pod \"mysqld-exporter-7e8a-account-create-update-dzqhr\" (UID: \"6eb601c9-1da6-47be-b108-beb6a9cfbd03\") " pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.283470 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eb601c9-1da6-47be-b108-beb6a9cfbd03-operator-scripts\") pod \"mysqld-exporter-7e8a-account-create-update-dzqhr\" (UID: \"6eb601c9-1da6-47be-b108-beb6a9cfbd03\") " pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.285401 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eb601c9-1da6-47be-b108-beb6a9cfbd03-operator-scripts\") pod 
\"mysqld-exporter-7e8a-account-create-update-dzqhr\" (UID: \"6eb601c9-1da6-47be-b108-beb6a9cfbd03\") " pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.300603 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qvnq\" (UniqueName: \"kubernetes.io/projected/6eb601c9-1da6-47be-b108-beb6a9cfbd03-kube-api-access-5qvnq\") pod \"mysqld-exporter-7e8a-account-create-update-dzqhr\" (UID: \"6eb601c9-1da6-47be-b108-beb6a9cfbd03\") " pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.315693 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.491550 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.795949 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vms5f"] Feb 16 21:14:54 crc kubenswrapper[4805]: W0216 21:14:54.805250 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e1ca094_bee0_4e7f_a0e6_f3e9f6cb0dce.slice/crio-462ea3c23405875ff3837209f244d6f64765b6267e4c6e3f91f9f9a1ab4c2911 WatchSource:0}: Error finding container 462ea3c23405875ff3837209f244d6f64765b6267e4c6e3f91f9f9a1ab4c2911: Status 404 returned error can't find the container with id 462ea3c23405875ff3837209f244d6f64765b6267e4c6e3f91f9f9a1ab4c2911 Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.868996 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf" Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.909079 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-ovsdbserver-nb\") pod \"bd3976a1-6498-480d-a4d9-ebec8797c16d\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.909222 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-config\") pod \"bd3976a1-6498-480d-a4d9-ebec8797c16d\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.909256 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-dns-svc\") pod \"bd3976a1-6498-480d-a4d9-ebec8797c16d\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.909437 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp4bc\" (UniqueName: \"kubernetes.io/projected/bd3976a1-6498-480d-a4d9-ebec8797c16d-kube-api-access-sp4bc\") pod \"bd3976a1-6498-480d-a4d9-ebec8797c16d\" (UID: \"bd3976a1-6498-480d-a4d9-ebec8797c16d\") " Feb 16 21:14:54 crc kubenswrapper[4805]: I0216 21:14:54.917263 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3976a1-6498-480d-a4d9-ebec8797c16d-kube-api-access-sp4bc" (OuterVolumeSpecName: "kube-api-access-sp4bc") pod "bd3976a1-6498-480d-a4d9-ebec8797c16d" (UID: "bd3976a1-6498-480d-a4d9-ebec8797c16d"). InnerVolumeSpecName "kube-api-access-sp4bc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.013744 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp4bc\" (UniqueName: \"kubernetes.io/projected/bd3976a1-6498-480d-a4d9-ebec8797c16d-kube-api-access-sp4bc\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.046500 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd3976a1-6498-480d-a4d9-ebec8797c16d" (UID: "bd3976a1-6498-480d-a4d9-ebec8797c16d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.047965 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd3976a1-6498-480d-a4d9-ebec8797c16d" (UID: "bd3976a1-6498-480d-a4d9-ebec8797c16d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.052899 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lfl97"] Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.068331 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-config" (OuterVolumeSpecName: "config") pod "bd3976a1-6498-480d-a4d9-ebec8797c16d" (UID: "bd3976a1-6498-480d-a4d9-ebec8797c16d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.094320 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 21:14:55 crc kubenswrapper[4805]: E0216 21:14:55.095045 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3976a1-6498-480d-a4d9-ebec8797c16d" containerName="dnsmasq-dns" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.095060 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3976a1-6498-480d-a4d9-ebec8797c16d" containerName="dnsmasq-dns" Feb 16 21:14:55 crc kubenswrapper[4805]: E0216 21:14:55.095101 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3976a1-6498-480d-a4d9-ebec8797c16d" containerName="init" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.095107 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3976a1-6498-480d-a4d9-ebec8797c16d" containerName="init" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.095768 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd3976a1-6498-480d-a4d9-ebec8797c16d" containerName="dnsmasq-dns" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.103866 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.108161 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.108306 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-4ghjh" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.108358 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.109065 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.116311 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.116344 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zdmr\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-kube-api-access-4zdmr\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.116485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.116565 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-lock\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.116599 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-10e66d3b-1529-4a4b-908c-38e769d3641c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e66d3b-1529-4a4b-908c-38e769d3641c\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.116661 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-cache\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.116870 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.116885 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.116894 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3976a1-6498-480d-a4d9-ebec8797c16d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.116968 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 21:14:55 
crc kubenswrapper[4805]: I0216 21:14:55.218624 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-cache\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.218694 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.218710 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zdmr\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-kube-api-access-4zdmr\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.218758 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.218811 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-lock\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.218842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-10e66d3b-1529-4a4b-908c-38e769d3641c\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e66d3b-1529-4a4b-908c-38e769d3641c\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: E0216 21:14:55.219252 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:14:55 crc kubenswrapper[4805]: E0216 21:14:55.219265 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.219264 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-cache\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: E0216 21:14:55.219303 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift podName:b51bad1e-99c6-4e2b-ae2b-c7e338ef235e nodeName:}" failed. No retries permitted until 2026-02-16 21:14:55.71929129 +0000 UTC m=+1113.537974585 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift") pod "swift-storage-0" (UID: "b51bad1e-99c6-4e2b-ae2b-c7e338ef235e") : configmap "swift-ring-files" not found Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.219537 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-lock\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.224497 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.224539 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-10e66d3b-1529-4a4b-908c-38e769d3641c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e66d3b-1529-4a4b-908c-38e769d3641c\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f438a31a5d11fc1445c67adf76bf314fe38e115ef57adefeebb33821830acfa/globalmount\"" pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.227031 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.244110 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zdmr\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-kube-api-access-4zdmr\") pod 
\"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.260524 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" event={"ID":"69ea890f-a85e-40d2-8722-71bcd489b1ec","Type":"ContainerStarted","Data":"7458b66d1988adda4cecc8119c3b959cae64bfb9fafb922f80ef6ea2b34c1f96"} Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.262339 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vms5f" event={"ID":"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce","Type":"ContainerStarted","Data":"223456c77052e995a9836c6e84ad8d475762c56d82a2f5726cce53ed2ac54761"} Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.262365 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vms5f" event={"ID":"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce","Type":"ContainerStarted","Data":"462ea3c23405875ff3837209f244d6f64765b6267e4c6e3f91f9f9a1ab4c2911"} Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.265285 4805 generic.go:334] "Generic (PLEG): container finished" podID="2ea45f0e-b56b-42e5-a7e3-c30894c51f9f" containerID="06a4930dc414e0244111f902d77e93c78a0259a4190871ed8b8dc36f05befaab" exitCode=0 Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.265336 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9sptw" event={"ID":"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f","Type":"ContainerDied","Data":"06a4930dc414e0244111f902d77e93c78a0259a4190871ed8b8dc36f05befaab"} Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.266585 4805 generic.go:334] "Generic (PLEG): container finished" podID="18f63756-edb7-48fb-a2b0-0c911a9f7520" containerID="331a908851145aaf48f79e577a393098f06206e948ce58f98715c42339a9a002" exitCode=0 Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.266629 4805 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/keystone-db-create-6jbxd" event={"ID":"18f63756-edb7-48fb-a2b0-0c911a9f7520","Type":"ContainerDied","Data":"331a908851145aaf48f79e577a393098f06206e948ce58f98715c42339a9a002"} Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.268067 4805 generic.go:334] "Generic (PLEG): container finished" podID="29a79f73-956b-4a3f-896a-ec53b38e84f4" containerID="24687ce1f74c71e60bf705a2e1013130230352b99b41c099a89ab612b3550228" exitCode=0 Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.268141 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-104e-account-create-update-g4bmr" event={"ID":"29a79f73-956b-4a3f-896a-ec53b38e84f4","Type":"ContainerDied","Data":"24687ce1f74c71e60bf705a2e1013130230352b99b41c099a89ab612b3550228"} Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.277422 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db51-account-create-update-bxgf7" event={"ID":"61f67549-f167-4252-9aa0-d19ab787ab6b","Type":"ContainerDied","Data":"8231add64d6c1f35e48bc1951f0d37635a7b8cdb8c3bf3152d1e6a145284b077"} Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.277380 4805 generic.go:334] "Generic (PLEG): container finished" podID="61f67549-f167-4252-9aa0-d19ab787ab6b" containerID="8231add64d6c1f35e48bc1951f0d37635a7b8cdb8c3bf3152d1e6a145284b077" exitCode=0 Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.279929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"532c871c-9fef-4023-a49c-ef44566659ff","Type":"ContainerStarted","Data":"ad990047f739d2326a0d61e07f82f8f620abfe7b7093c980095b3ee077d3a6d7"} Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.296303 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-vms5f" podStartSLOduration=2.2962735260000002 podStartE2EDuration="2.296273526s" podCreationTimestamp="2026-02-16 21:14:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:55.277302881 +0000 UTC m=+1113.095986176" watchObservedRunningTime="2026-02-16 21:14:55.296273526 +0000 UTC m=+1113.114956821" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.300435 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf" event={"ID":"bd3976a1-6498-480d-a4d9-ebec8797c16d","Type":"ContainerDied","Data":"40370c17ecc39239ad7c9f6ae3b03cde59dc4247663c3f3281ae5b786cd23f9c"} Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.300503 4805 scope.go:117] "RemoveContainer" containerID="ce30475da0fb603766a1ce6568786a4eb5c0ffaf32798ee346cfbd77b873a995" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.300709 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.324035 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-10e66d3b-1529-4a4b-908c-38e769d3641c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e66d3b-1529-4a4b-908c-38e769d3641c\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: W0216 21:14:55.376510 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eb601c9_1da6_47be_b108_beb6a9cfbd03.slice/crio-d6c2657761c1d93ebc091f7b0697939ba214994ddea5930134e6c2a587cd4e14 WatchSource:0}: Error finding container d6c2657761c1d93ebc091f7b0697939ba214994ddea5930134e6c2a587cd4e14: Status 404 returned error can't find the container with id d6c2657761c1d93ebc091f7b0697939ba214994ddea5930134e6c2a587cd4e14 Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.388839 4805 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/mysqld-exporter-7e8a-account-create-update-dzqhr"] Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.697182 4805 scope.go:117] "RemoveContainer" containerID="563d9ca381cb940c006191940e1ae5f776a85cab6812153950161babef77100c" Feb 16 21:14:55 crc kubenswrapper[4805]: I0216 21:14:55.739900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:55 crc kubenswrapper[4805]: E0216 21:14:55.740092 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:14:55 crc kubenswrapper[4805]: E0216 21:14:55.740120 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:14:55 crc kubenswrapper[4805]: E0216 21:14:55.740173 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift podName:b51bad1e-99c6-4e2b-ae2b-c7e338ef235e nodeName:}" failed. No retries permitted until 2026-02-16 21:14:56.740155422 +0000 UTC m=+1114.558838717 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift") pod "swift-storage-0" (UID: "b51bad1e-99c6-4e2b-ae2b-c7e338ef235e") : configmap "swift-ring-files" not found Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.326580 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"532c871c-9fef-4023-a49c-ef44566659ff","Type":"ContainerStarted","Data":"87a39697dba0784811265573b55b88a159f391c43798ce0b6175c50a6cdfd35f"} Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.327135 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.332929 4805 generic.go:334] "Generic (PLEG): container finished" podID="6eb601c9-1da6-47be-b108-beb6a9cfbd03" containerID="5bec55d553cf84066046a8e2441ec508d1a634c388468a393a2550eb567a8675" exitCode=0 Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.333008 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" event={"ID":"6eb601c9-1da6-47be-b108-beb6a9cfbd03","Type":"ContainerDied","Data":"5bec55d553cf84066046a8e2441ec508d1a634c388468a393a2550eb567a8675"} Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.333076 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" event={"ID":"6eb601c9-1da6-47be-b108-beb6a9cfbd03","Type":"ContainerStarted","Data":"d6c2657761c1d93ebc091f7b0697939ba214994ddea5930134e6c2a587cd4e14"} Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.341491 4805 generic.go:334] "Generic (PLEG): container finished" podID="69ea890f-a85e-40d2-8722-71bcd489b1ec" containerID="1af8333abc1338aa367ebc5da7467323ad21953991e3cabf0e3664aaa7126be8" exitCode=0 Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.341570 4805 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" event={"ID":"69ea890f-a85e-40d2-8722-71bcd489b1ec","Type":"ContainerDied","Data":"1af8333abc1338aa367ebc5da7467323ad21953991e3cabf0e3664aaa7126be8"} Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.347282 4805 generic.go:334] "Generic (PLEG): container finished" podID="3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce" containerID="223456c77052e995a9836c6e84ad8d475762c56d82a2f5726cce53ed2ac54761" exitCode=0 Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.347857 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vms5f" event={"ID":"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce","Type":"ContainerDied","Data":"223456c77052e995a9836c6e84ad8d475762c56d82a2f5726cce53ed2ac54761"} Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.394769 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.5908843790000002 podStartE2EDuration="5.394749007s" podCreationTimestamp="2026-02-16 21:14:51 +0000 UTC" firstStartedPulling="2026-02-16 21:14:52.045015322 +0000 UTC m=+1109.863698617" lastFinishedPulling="2026-02-16 21:14:53.84887995 +0000 UTC m=+1111.667563245" observedRunningTime="2026-02-16 21:14:56.374352404 +0000 UTC m=+1114.193035739" watchObservedRunningTime="2026-02-16 21:14:56.394749007 +0000 UTC m=+1114.213432312" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.541002 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-g867x"] Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.546971 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-g867x" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.568766 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-g867x"] Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.631019 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d910-account-create-update-trmdz"] Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.632262 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d910-account-create-update-trmdz" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.636434 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.657436 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d910-account-create-update-trmdz"] Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.662382 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn4tg\" (UniqueName: \"kubernetes.io/projected/52022a46-b370-413d-be8d-de7c5d3ed7af-kube-api-access-jn4tg\") pod \"glance-db-create-g867x\" (UID: \"52022a46-b370-413d-be8d-de7c5d3ed7af\") " pod="openstack/glance-db-create-g867x" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.662448 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52022a46-b370-413d-be8d-de7c5d3ed7af-operator-scripts\") pod \"glance-db-create-g867x\" (UID: \"52022a46-b370-413d-be8d-de7c5d3ed7af\") " pod="openstack/glance-db-create-g867x" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.764701 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn4tg\" (UniqueName: 
\"kubernetes.io/projected/52022a46-b370-413d-be8d-de7c5d3ed7af-kube-api-access-jn4tg\") pod \"glance-db-create-g867x\" (UID: \"52022a46-b370-413d-be8d-de7c5d3ed7af\") " pod="openstack/glance-db-create-g867x" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.764784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52022a46-b370-413d-be8d-de7c5d3ed7af-operator-scripts\") pod \"glance-db-create-g867x\" (UID: \"52022a46-b370-413d-be8d-de7c5d3ed7af\") " pod="openstack/glance-db-create-g867x" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.764933 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adc7e606-c1b5-4a97-bca8-21866460d586-operator-scripts\") pod \"glance-d910-account-create-update-trmdz\" (UID: \"adc7e606-c1b5-4a97-bca8-21866460d586\") " pod="openstack/glance-d910-account-create-update-trmdz" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.765169 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v87gx\" (UniqueName: \"kubernetes.io/projected/adc7e606-c1b5-4a97-bca8-21866460d586-kube-api-access-v87gx\") pod \"glance-d910-account-create-update-trmdz\" (UID: \"adc7e606-c1b5-4a97-bca8-21866460d586\") " pod="openstack/glance-d910-account-create-update-trmdz" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.765315 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:56 crc kubenswrapper[4805]: E0216 21:14:56.766143 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 
16 21:14:56 crc kubenswrapper[4805]: E0216 21:14:56.766166 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:14:56 crc kubenswrapper[4805]: E0216 21:14:56.766208 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift podName:b51bad1e-99c6-4e2b-ae2b-c7e338ef235e nodeName:}" failed. No retries permitted until 2026-02-16 21:14:58.766192253 +0000 UTC m=+1116.584875548 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift") pod "swift-storage-0" (UID: "b51bad1e-99c6-4e2b-ae2b-c7e338ef235e") : configmap "swift-ring-files" not found Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.772569 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52022a46-b370-413d-be8d-de7c5d3ed7af-operator-scripts\") pod \"glance-db-create-g867x\" (UID: \"52022a46-b370-413d-be8d-de7c5d3ed7af\") " pod="openstack/glance-db-create-g867x" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.791978 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn4tg\" (UniqueName: \"kubernetes.io/projected/52022a46-b370-413d-be8d-de7c5d3ed7af-kube-api-access-jn4tg\") pod \"glance-db-create-g867x\" (UID: \"52022a46-b370-413d-be8d-de7c5d3ed7af\") " pod="openstack/glance-db-create-g867x" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.867360 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adc7e606-c1b5-4a97-bca8-21866460d586-operator-scripts\") pod \"glance-d910-account-create-update-trmdz\" (UID: \"adc7e606-c1b5-4a97-bca8-21866460d586\") " 
pod="openstack/glance-d910-account-create-update-trmdz" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.867449 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v87gx\" (UniqueName: \"kubernetes.io/projected/adc7e606-c1b5-4a97-bca8-21866460d586-kube-api-access-v87gx\") pod \"glance-d910-account-create-update-trmdz\" (UID: \"adc7e606-c1b5-4a97-bca8-21866460d586\") " pod="openstack/glance-d910-account-create-update-trmdz" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.868517 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adc7e606-c1b5-4a97-bca8-21866460d586-operator-scripts\") pod \"glance-d910-account-create-update-trmdz\" (UID: \"adc7e606-c1b5-4a97-bca8-21866460d586\") " pod="openstack/glance-d910-account-create-update-trmdz" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.883816 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v87gx\" (UniqueName: \"kubernetes.io/projected/adc7e606-c1b5-4a97-bca8-21866460d586-kube-api-access-v87gx\") pod \"glance-d910-account-create-update-trmdz\" (UID: \"adc7e606-c1b5-4a97-bca8-21866460d586\") " pod="openstack/glance-d910-account-create-update-trmdz" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.895044 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g867x" Feb 16 21:14:56 crc kubenswrapper[4805]: I0216 21:14:56.966924 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d910-account-create-update-trmdz" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.025367 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-104e-account-create-update-g4bmr" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.172015 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzls4\" (UniqueName: \"kubernetes.io/projected/29a79f73-956b-4a3f-896a-ec53b38e84f4-kube-api-access-tzls4\") pod \"29a79f73-956b-4a3f-896a-ec53b38e84f4\" (UID: \"29a79f73-956b-4a3f-896a-ec53b38e84f4\") " Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.172168 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a79f73-956b-4a3f-896a-ec53b38e84f4-operator-scripts\") pod \"29a79f73-956b-4a3f-896a-ec53b38e84f4\" (UID: \"29a79f73-956b-4a3f-896a-ec53b38e84f4\") " Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.174959 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a79f73-956b-4a3f-896a-ec53b38e84f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29a79f73-956b-4a3f-896a-ec53b38e84f4" (UID: "29a79f73-956b-4a3f-896a-ec53b38e84f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.180254 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a79f73-956b-4a3f-896a-ec53b38e84f4-kube-api-access-tzls4" (OuterVolumeSpecName: "kube-api-access-tzls4") pod "29a79f73-956b-4a3f-896a-ec53b38e84f4" (UID: "29a79f73-956b-4a3f-896a-ec53b38e84f4"). InnerVolumeSpecName "kube-api-access-tzls4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.274175 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzls4\" (UniqueName: \"kubernetes.io/projected/29a79f73-956b-4a3f-896a-ec53b38e84f4-kube-api-access-tzls4\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.274233 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a79f73-956b-4a3f-896a-ec53b38e84f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.310336 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db51-account-create-update-bxgf7" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.317526 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9sptw" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.323957 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6jbxd" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.377442 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-104e-account-create-update-g4bmr" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.377843 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-104e-account-create-update-g4bmr" event={"ID":"29a79f73-956b-4a3f-896a-ec53b38e84f4","Type":"ContainerDied","Data":"f552030ae07544de2be793ed51a65f7a60fa4034f85c25ba3e727ee404107b0e"} Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.378943 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f552030ae07544de2be793ed51a65f7a60fa4034f85c25ba3e727ee404107b0e" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.382790 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db51-account-create-update-bxgf7" event={"ID":"61f67549-f167-4252-9aa0-d19ab787ab6b","Type":"ContainerDied","Data":"9c998371e5d8f34dbbe247dc43ab3d8b1133e1c6869486610198cd37063f86d5"} Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.382900 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c998371e5d8f34dbbe247dc43ab3d8b1133e1c6869486610198cd37063f86d5" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.382992 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db51-account-create-update-bxgf7" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.391371 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" event={"ID":"69ea890f-a85e-40d2-8722-71bcd489b1ec","Type":"ContainerStarted","Data":"e900fe6df134219de0ae70ee025fa622baf07d7f21b83139a4369c9fd946a11c"} Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.393693 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.395511 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9sptw" event={"ID":"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f","Type":"ContainerDied","Data":"61991e923cc7bcb734638657eb0e6c2f775790edf95b4a6dc2bfa42fc7d25c5d"} Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.395556 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61991e923cc7bcb734638657eb0e6c2f775790edf95b4a6dc2bfa42fc7d25c5d" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.395753 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9sptw" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.400741 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6jbxd" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.400807 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6jbxd" event={"ID":"18f63756-edb7-48fb-a2b0-0c911a9f7520","Type":"ContainerDied","Data":"2a63eeb39c588e0d516a7040aaa1b21936a5a8d6f120a1da6dad16012edb275b"} Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.400838 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a63eeb39c588e0d516a7040aaa1b21936a5a8d6f120a1da6dad16012edb275b" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.423272 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" podStartSLOduration=4.423221485 podStartE2EDuration="4.423221485s" podCreationTimestamp="2026-02-16 21:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:14:57.415864322 +0000 UTC m=+1115.234547617" watchObservedRunningTime="2026-02-16 21:14:57.423221485 +0000 UTC m=+1115.241904800" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.483347 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqkrd\" (UniqueName: \"kubernetes.io/projected/18f63756-edb7-48fb-a2b0-0c911a9f7520-kube-api-access-tqkrd\") pod \"18f63756-edb7-48fb-a2b0-0c911a9f7520\" (UID: \"18f63756-edb7-48fb-a2b0-0c911a9f7520\") " Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.483492 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-operator-scripts\") pod \"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f\" (UID: \"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f\") " Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.483519 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-q4bqd\" (UniqueName: \"kubernetes.io/projected/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-kube-api-access-q4bqd\") pod \"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f\" (UID: \"2ea45f0e-b56b-42e5-a7e3-c30894c51f9f\") " Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.483587 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7gjk\" (UniqueName: \"kubernetes.io/projected/61f67549-f167-4252-9aa0-d19ab787ab6b-kube-api-access-f7gjk\") pod \"61f67549-f167-4252-9aa0-d19ab787ab6b\" (UID: \"61f67549-f167-4252-9aa0-d19ab787ab6b\") " Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.483696 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f63756-edb7-48fb-a2b0-0c911a9f7520-operator-scripts\") pod \"18f63756-edb7-48fb-a2b0-0c911a9f7520\" (UID: \"18f63756-edb7-48fb-a2b0-0c911a9f7520\") " Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.483746 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61f67549-f167-4252-9aa0-d19ab787ab6b-operator-scripts\") pod \"61f67549-f167-4252-9aa0-d19ab787ab6b\" (UID: \"61f67549-f167-4252-9aa0-d19ab787ab6b\") " Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.486077 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ea45f0e-b56b-42e5-a7e3-c30894c51f9f" (UID: "2ea45f0e-b56b-42e5-a7e3-c30894c51f9f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.486085 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f67549-f167-4252-9aa0-d19ab787ab6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61f67549-f167-4252-9aa0-d19ab787ab6b" (UID: "61f67549-f167-4252-9aa0-d19ab787ab6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.486594 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f63756-edb7-48fb-a2b0-0c911a9f7520-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18f63756-edb7-48fb-a2b0-0c911a9f7520" (UID: "18f63756-edb7-48fb-a2b0-0c911a9f7520"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.489240 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-kube-api-access-q4bqd" (OuterVolumeSpecName: "kube-api-access-q4bqd") pod "2ea45f0e-b56b-42e5-a7e3-c30894c51f9f" (UID: "2ea45f0e-b56b-42e5-a7e3-c30894c51f9f"). InnerVolumeSpecName "kube-api-access-q4bqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.502220 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f67549-f167-4252-9aa0-d19ab787ab6b-kube-api-access-f7gjk" (OuterVolumeSpecName: "kube-api-access-f7gjk") pod "61f67549-f167-4252-9aa0-d19ab787ab6b" (UID: "61f67549-f167-4252-9aa0-d19ab787ab6b"). InnerVolumeSpecName "kube-api-access-f7gjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.512031 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f63756-edb7-48fb-a2b0-0c911a9f7520-kube-api-access-tqkrd" (OuterVolumeSpecName: "kube-api-access-tqkrd") pod "18f63756-edb7-48fb-a2b0-0c911a9f7520" (UID: "18f63756-edb7-48fb-a2b0-0c911a9f7520"). InnerVolumeSpecName "kube-api-access-tqkrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.585698 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7gjk\" (UniqueName: \"kubernetes.io/projected/61f67549-f167-4252-9aa0-d19ab787ab6b-kube-api-access-f7gjk\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.585741 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f63756-edb7-48fb-a2b0-0c911a9f7520-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.585753 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61f67549-f167-4252-9aa0-d19ab787ab6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.585764 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqkrd\" (UniqueName: \"kubernetes.io/projected/18f63756-edb7-48fb-a2b0-0c911a9f7520-kube-api-access-tqkrd\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.585776 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.585788 4805 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-q4bqd\" (UniqueName: \"kubernetes.io/projected/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f-kube-api-access-q4bqd\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:57 crc kubenswrapper[4805]: I0216 21:14:57.948941 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.008522 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vms5f" Feb 16 21:14:58 crc kubenswrapper[4805]: W0216 21:14:58.061507 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52022a46_b370_413d_be8d_de7c5d3ed7af.slice/crio-f23a74b8c2d6a41ee1712a882b0eb685640824ea2074b7b181c84a4dec97ea1b WatchSource:0}: Error finding container f23a74b8c2d6a41ee1712a882b0eb685640824ea2074b7b181c84a4dec97ea1b: Status 404 returned error can't find the container with id f23a74b8c2d6a41ee1712a882b0eb685640824ea2074b7b181c84a4dec97ea1b Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.072930 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-g867x"] Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.091404 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d910-account-create-update-trmdz"] Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.110205 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krkch\" (UniqueName: \"kubernetes.io/projected/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-kube-api-access-krkch\") pod \"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce\" (UID: \"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce\") " Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.110370 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eb601c9-1da6-47be-b108-beb6a9cfbd03-operator-scripts\") pod \"6eb601c9-1da6-47be-b108-beb6a9cfbd03\" (UID: \"6eb601c9-1da6-47be-b108-beb6a9cfbd03\") " Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.110401 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-operator-scripts\") pod \"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce\" (UID: \"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce\") " Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.110431 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qvnq\" (UniqueName: \"kubernetes.io/projected/6eb601c9-1da6-47be-b108-beb6a9cfbd03-kube-api-access-5qvnq\") pod \"6eb601c9-1da6-47be-b108-beb6a9cfbd03\" (UID: \"6eb601c9-1da6-47be-b108-beb6a9cfbd03\") " Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.110977 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eb601c9-1da6-47be-b108-beb6a9cfbd03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6eb601c9-1da6-47be-b108-beb6a9cfbd03" (UID: "6eb601c9-1da6-47be-b108-beb6a9cfbd03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.112298 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce" (UID: "3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.117996 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-kube-api-access-krkch" (OuterVolumeSpecName: "kube-api-access-krkch") pod "3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce" (UID: "3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce"). InnerVolumeSpecName "kube-api-access-krkch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.118331 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb601c9-1da6-47be-b108-beb6a9cfbd03-kube-api-access-5qvnq" (OuterVolumeSpecName: "kube-api-access-5qvnq") pod "6eb601c9-1da6-47be-b108-beb6a9cfbd03" (UID: "6eb601c9-1da6-47be-b108-beb6a9cfbd03"). InnerVolumeSpecName "kube-api-access-5qvnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.212183 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krkch\" (UniqueName: \"kubernetes.io/projected/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-kube-api-access-krkch\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.212217 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eb601c9-1da6-47be-b108-beb6a9cfbd03-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.212226 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.212234 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qvnq\" (UniqueName: 
\"kubernetes.io/projected/6eb601c9-1da6-47be-b108-beb6a9cfbd03-kube-api-access-5qvnq\") on node \"crc\" DevicePath \"\"" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268007 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gnm4t"] Feb 16 21:14:58 crc kubenswrapper[4805]: E0216 21:14:58.268536 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea45f0e-b56b-42e5-a7e3-c30894c51f9f" containerName="mariadb-database-create" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268558 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea45f0e-b56b-42e5-a7e3-c30894c51f9f" containerName="mariadb-database-create" Feb 16 21:14:58 crc kubenswrapper[4805]: E0216 21:14:58.268579 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f67549-f167-4252-9aa0-d19ab787ab6b" containerName="mariadb-account-create-update" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268586 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f67549-f167-4252-9aa0-d19ab787ab6b" containerName="mariadb-account-create-update" Feb 16 21:14:58 crc kubenswrapper[4805]: E0216 21:14:58.268599 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f63756-edb7-48fb-a2b0-0c911a9f7520" containerName="mariadb-database-create" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268606 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f63756-edb7-48fb-a2b0-0c911a9f7520" containerName="mariadb-database-create" Feb 16 21:14:58 crc kubenswrapper[4805]: E0216 21:14:58.268617 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a79f73-956b-4a3f-896a-ec53b38e84f4" containerName="mariadb-account-create-update" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268623 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a79f73-956b-4a3f-896a-ec53b38e84f4" containerName="mariadb-account-create-update" Feb 16 21:14:58 crc 
kubenswrapper[4805]: E0216 21:14:58.268634 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb601c9-1da6-47be-b108-beb6a9cfbd03" containerName="mariadb-account-create-update" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268640 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb601c9-1da6-47be-b108-beb6a9cfbd03" containerName="mariadb-account-create-update" Feb 16 21:14:58 crc kubenswrapper[4805]: E0216 21:14:58.268653 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce" containerName="mariadb-database-create" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268659 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce" containerName="mariadb-database-create" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268940 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce" containerName="mariadb-database-create" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268960 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a79f73-956b-4a3f-896a-ec53b38e84f4" containerName="mariadb-account-create-update" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268966 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea45f0e-b56b-42e5-a7e3-c30894c51f9f" containerName="mariadb-database-create" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268981 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f67549-f167-4252-9aa0-d19ab787ab6b" containerName="mariadb-account-create-update" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.268988 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f63756-edb7-48fb-a2b0-0c911a9f7520" containerName="mariadb-database-create" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.269006 4805 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6eb601c9-1da6-47be-b108-beb6a9cfbd03" containerName="mariadb-account-create-update" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.269712 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gnm4t" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.271517 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.280283 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gnm4t"] Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.413604 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.413630 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7e8a-account-create-update-dzqhr" event={"ID":"6eb601c9-1da6-47be-b108-beb6a9cfbd03","Type":"ContainerDied","Data":"d6c2657761c1d93ebc091f7b0697939ba214994ddea5930134e6c2a587cd4e14"} Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.413669 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6c2657761c1d93ebc091f7b0697939ba214994ddea5930134e6c2a587cd4e14" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.416389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lctf\" (UniqueName: \"kubernetes.io/projected/2a65d522-8682-4311-9e3a-d6575370411b-kube-api-access-2lctf\") pod \"root-account-create-update-gnm4t\" (UID: \"2a65d522-8682-4311-9e3a-d6575370411b\") " pod="openstack/root-account-create-update-gnm4t" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.416529 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a65d522-8682-4311-9e3a-d6575370411b-operator-scripts\") pod \"root-account-create-update-gnm4t\" (UID: \"2a65d522-8682-4311-9e3a-d6575370411b\") " pod="openstack/root-account-create-update-gnm4t" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.418874 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vms5f" event={"ID":"3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce","Type":"ContainerDied","Data":"462ea3c23405875ff3837209f244d6f64765b6267e4c6e3f91f9f9a1ab4c2911"} Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.418907 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="462ea3c23405875ff3837209f244d6f64765b6267e4c6e3f91f9f9a1ab4c2911" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.418945 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vms5f" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.420224 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g867x" event={"ID":"52022a46-b370-413d-be8d-de7c5d3ed7af","Type":"ContainerStarted","Data":"f23a74b8c2d6a41ee1712a882b0eb685640824ea2074b7b181c84a4dec97ea1b"} Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.421835 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d910-account-create-update-trmdz" event={"ID":"adc7e606-c1b5-4a97-bca8-21866460d586","Type":"ContainerStarted","Data":"ecf78b5e8c83ee96ae0b81e89b92c095276650f60d03a0f9c38c42d3b5601631"} Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.527882 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lctf\" (UniqueName: \"kubernetes.io/projected/2a65d522-8682-4311-9e3a-d6575370411b-kube-api-access-2lctf\") pod \"root-account-create-update-gnm4t\" (UID: \"2a65d522-8682-4311-9e3a-d6575370411b\") " 
pod="openstack/root-account-create-update-gnm4t" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.528443 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a65d522-8682-4311-9e3a-d6575370411b-operator-scripts\") pod \"root-account-create-update-gnm4t\" (UID: \"2a65d522-8682-4311-9e3a-d6575370411b\") " pod="openstack/root-account-create-update-gnm4t" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.529183 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a65d522-8682-4311-9e3a-d6575370411b-operator-scripts\") pod \"root-account-create-update-gnm4t\" (UID: \"2a65d522-8682-4311-9e3a-d6575370411b\") " pod="openstack/root-account-create-update-gnm4t" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.553851 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lctf\" (UniqueName: \"kubernetes.io/projected/2a65d522-8682-4311-9e3a-d6575370411b-kube-api-access-2lctf\") pod \"root-account-create-update-gnm4t\" (UID: \"2a65d522-8682-4311-9e3a-d6575370411b\") " pod="openstack/root-account-create-update-gnm4t" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.589413 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-kfgdf" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.673890 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gnm4t" Feb 16 21:14:58 crc kubenswrapper[4805]: I0216 21:14:58.835587 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:14:58 crc kubenswrapper[4805]: E0216 21:14:58.835971 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:14:58 crc kubenswrapper[4805]: E0216 21:14:58.835992 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:14:58 crc kubenswrapper[4805]: E0216 21:14:58.836037 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift podName:b51bad1e-99c6-4e2b-ae2b-c7e338ef235e nodeName:}" failed. No retries permitted until 2026-02-16 21:15:02.836020915 +0000 UTC m=+1120.654704200 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift") pod "swift-storage-0" (UID: "b51bad1e-99c6-4e2b-ae2b-c7e338ef235e") : configmap "swift-ring-files" not found Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.016061 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-tgvc9"] Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.017653 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.022255 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.023310 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.023362 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.040147 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qcc\" (UniqueName: \"kubernetes.io/projected/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-kube-api-access-n9qcc\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.040182 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-combined-ca-bundle\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.040204 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-swiftconf\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.040269 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-scripts\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.040290 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-etc-swift\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.040343 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-dispersionconf\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.040366 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-ring-data-devices\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.045674 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-tgvc9"] Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.137357 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gnm4t"] Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.141536 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-scripts\") pod 
\"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.141576 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-etc-swift\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.141632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-dispersionconf\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.141657 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-ring-data-devices\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.141744 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9qcc\" (UniqueName: \"kubernetes.io/projected/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-kube-api-access-n9qcc\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.141763 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-combined-ca-bundle\") pod \"swift-ring-rebalance-tgvc9\" (UID: 
\"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.141795 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-swiftconf\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.142100 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-etc-swift\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.142503 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-ring-data-devices\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.142395 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-scripts\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: W0216 21:14:59.144441 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a65d522_8682_4311_9e3a_d6575370411b.slice/crio-c88af98bd8a75a4466dd921e321962b51335bba5f9441c5081abe78d471295e1 WatchSource:0}: Error finding container c88af98bd8a75a4466dd921e321962b51335bba5f9441c5081abe78d471295e1: Status 404 
returned error can't find the container with id c88af98bd8a75a4466dd921e321962b51335bba5f9441c5081abe78d471295e1 Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.147626 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-dispersionconf\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.147925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-combined-ca-bundle\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.148211 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-swiftconf\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.162607 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9qcc\" (UniqueName: \"kubernetes.io/projected/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-kube-api-access-n9qcc\") pod \"swift-ring-rebalance-tgvc9\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.345758 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.433274 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gnm4t" event={"ID":"2a65d522-8682-4311-9e3a-d6575370411b","Type":"ContainerStarted","Data":"c88af98bd8a75a4466dd921e321962b51335bba5f9441c5081abe78d471295e1"} Feb 16 21:14:59 crc kubenswrapper[4805]: I0216 21:14:59.825289 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-tgvc9"] Feb 16 21:14:59 crc kubenswrapper[4805]: W0216 21:14:59.829567 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f409f3b_50d6_47b1_9abb_e90ba2cc03ab.slice/crio-c3dab34d04392303eee33e1e156b565ad045f30ab2eb54ae75e150772adc80e2 WatchSource:0}: Error finding container c3dab34d04392303eee33e1e156b565ad045f30ab2eb54ae75e150772adc80e2: Status 404 returned error can't find the container with id c3dab34d04392303eee33e1e156b565ad045f30ab2eb54ae75e150772adc80e2 Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.171648 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx"] Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.173584 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.176485 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.176613 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.187145 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx"] Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.270610 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-config-volume\") pod \"collect-profiles-29521275-4vspx\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.270982 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqdt\" (UniqueName: \"kubernetes.io/projected/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-kube-api-access-jhqdt\") pod \"collect-profiles-29521275-4vspx\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.271175 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-secret-volume\") pod \"collect-profiles-29521275-4vspx\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.373088 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhqdt\" (UniqueName: \"kubernetes.io/projected/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-kube-api-access-jhqdt\") pod \"collect-profiles-29521275-4vspx\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.373234 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-secret-volume\") pod \"collect-profiles-29521275-4vspx\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.373299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-config-volume\") pod \"collect-profiles-29521275-4vspx\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.374322 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-config-volume\") pod \"collect-profiles-29521275-4vspx\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.396278 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-secret-volume\") pod \"collect-profiles-29521275-4vspx\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.402309 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhqdt\" (UniqueName: \"kubernetes.io/projected/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-kube-api-access-jhqdt\") pod \"collect-profiles-29521275-4vspx\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.443436 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tgvc9" event={"ID":"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab","Type":"ContainerStarted","Data":"c3dab34d04392303eee33e1e156b565ad045f30ab2eb54ae75e150772adc80e2"} Feb 16 21:15:00 crc kubenswrapper[4805]: I0216 21:15:00.503357 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:01 crc kubenswrapper[4805]: I0216 21:15:01.012065 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx"] Feb 16 21:15:01 crc kubenswrapper[4805]: I0216 21:15:01.459828 4805 generic.go:334] "Generic (PLEG): container finished" podID="2a65d522-8682-4311-9e3a-d6575370411b" containerID="15700f0c7164c46bd7e13ce8916f9f8fb68f1bc807989a780786c510041073b9" exitCode=0 Feb 16 21:15:01 crc kubenswrapper[4805]: I0216 21:15:01.459927 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gnm4t" event={"ID":"2a65d522-8682-4311-9e3a-d6575370411b","Type":"ContainerDied","Data":"15700f0c7164c46bd7e13ce8916f9f8fb68f1bc807989a780786c510041073b9"} Feb 16 21:15:01 crc kubenswrapper[4805]: I0216 21:15:01.462948 4805 generic.go:334] "Generic (PLEG): container finished" podID="52022a46-b370-413d-be8d-de7c5d3ed7af" containerID="4a214a25c55be32d623921c2482bb176cf660c264316621d3e8e0e8fa33cb184" exitCode=0 Feb 16 21:15:01 crc kubenswrapper[4805]: I0216 21:15:01.462983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g867x" event={"ID":"52022a46-b370-413d-be8d-de7c5d3ed7af","Type":"ContainerDied","Data":"4a214a25c55be32d623921c2482bb176cf660c264316621d3e8e0e8fa33cb184"} Feb 16 21:15:01 crc kubenswrapper[4805]: I0216 21:15:01.464839 4805 generic.go:334] "Generic (PLEG): container finished" podID="adc7e606-c1b5-4a97-bca8-21866460d586" containerID="95b1d1f1b9a13c9a6f8e6dcb85025c001b377cb336526c3111ac498121ecf773" exitCode=0 Feb 16 21:15:01 crc kubenswrapper[4805]: I0216 21:15:01.464884 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d910-account-create-update-trmdz" 
event={"ID":"adc7e606-c1b5-4a97-bca8-21866460d586","Type":"ContainerDied","Data":"95b1d1f1b9a13c9a6f8e6dcb85025c001b377cb336526c3111ac498121ecf773"} Feb 16 21:15:02 crc kubenswrapper[4805]: I0216 21:15:02.844374 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:15:02 crc kubenswrapper[4805]: E0216 21:15:02.845011 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:15:02 crc kubenswrapper[4805]: E0216 21:15:02.845058 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:15:02 crc kubenswrapper[4805]: E0216 21:15:02.845108 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift podName:b51bad1e-99c6-4e2b-ae2b-c7e338ef235e nodeName:}" failed. No retries permitted until 2026-02-16 21:15:10.845090463 +0000 UTC m=+1128.663773758 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift") pod "swift-storage-0" (UID: "b51bad1e-99c6-4e2b-ae2b-c7e338ef235e") : configmap "swift-ring-files" not found Feb 16 21:15:03 crc kubenswrapper[4805]: I0216 21:15:03.971191 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q"] Feb 16 21:15:03 crc kubenswrapper[4805]: I0216 21:15:03.974011 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" Feb 16 21:15:03 crc kubenswrapper[4805]: I0216 21:15:03.982148 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q"] Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.105107 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-546hw\" (UniqueName: \"kubernetes.io/projected/e7e67718-c8cf-4669-8b07-36e2fcc68898-kube-api-access-546hw\") pod \"mysqld-exporter-openstack-cell1-db-create-fsm5q\" (UID: \"e7e67718-c8cf-4669-8b07-36e2fcc68898\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.106243 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7e67718-c8cf-4669-8b07-36e2fcc68898-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fsm5q\" (UID: \"e7e67718-c8cf-4669-8b07-36e2fcc68898\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.180466 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-6ff9-account-create-update-smp8q"] Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.182176 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.185078 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.192658 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-6ff9-account-create-update-smp8q"] Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.209218 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-546hw\" (UniqueName: \"kubernetes.io/projected/e7e67718-c8cf-4669-8b07-36e2fcc68898-kube-api-access-546hw\") pod \"mysqld-exporter-openstack-cell1-db-create-fsm5q\" (UID: \"e7e67718-c8cf-4669-8b07-36e2fcc68898\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.209559 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7e67718-c8cf-4669-8b07-36e2fcc68898-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fsm5q\" (UID: \"e7e67718-c8cf-4669-8b07-36e2fcc68898\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.211051 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7e67718-c8cf-4669-8b07-36e2fcc68898-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fsm5q\" (UID: \"e7e67718-c8cf-4669-8b07-36e2fcc68898\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.243564 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-546hw\" (UniqueName: 
\"kubernetes.io/projected/e7e67718-c8cf-4669-8b07-36e2fcc68898-kube-api-access-546hw\") pod \"mysqld-exporter-openstack-cell1-db-create-fsm5q\" (UID: \"e7e67718-c8cf-4669-8b07-36e2fcc68898\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.311315 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7feac1e5-959a-468e-905c-62a5a07f98d4-operator-scripts\") pod \"mysqld-exporter-6ff9-account-create-update-smp8q\" (UID: \"7feac1e5-959a-468e-905c-62a5a07f98d4\") " pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.311434 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbdv\" (UniqueName: \"kubernetes.io/projected/7feac1e5-959a-468e-905c-62a5a07f98d4-kube-api-access-8pbdv\") pod \"mysqld-exporter-6ff9-account-create-update-smp8q\" (UID: \"7feac1e5-959a-468e-905c-62a5a07f98d4\") " pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.317904 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.328157 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.393490 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kfgdf"] Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.393707 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-kfgdf" podUID="5eab1109-6fc8-446e-b797-fc5e11e18f5e" containerName="dnsmasq-dns" containerID="cri-o://4a68fee97960fcc52b825c1dd5e18ca82895920228e7a5d63c663af0cd45cfcf" gracePeriod=10 Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.413557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7feac1e5-959a-468e-905c-62a5a07f98d4-operator-scripts\") pod \"mysqld-exporter-6ff9-account-create-update-smp8q\" (UID: \"7feac1e5-959a-468e-905c-62a5a07f98d4\") " pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.413679 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbdv\" (UniqueName: \"kubernetes.io/projected/7feac1e5-959a-468e-905c-62a5a07f98d4-kube-api-access-8pbdv\") pod \"mysqld-exporter-6ff9-account-create-update-smp8q\" (UID: \"7feac1e5-959a-468e-905c-62a5a07f98d4\") " pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.415518 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7feac1e5-959a-468e-905c-62a5a07f98d4-operator-scripts\") pod \"mysqld-exporter-6ff9-account-create-update-smp8q\" (UID: \"7feac1e5-959a-468e-905c-62a5a07f98d4\") " pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.431097 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbdv\" (UniqueName: \"kubernetes.io/projected/7feac1e5-959a-468e-905c-62a5a07f98d4-kube-api-access-8pbdv\") pod \"mysqld-exporter-6ff9-account-create-update-smp8q\" (UID: \"7feac1e5-959a-468e-905c-62a5a07f98d4\") " pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.508004 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" Feb 16 21:15:04 crc kubenswrapper[4805]: W0216 21:15:04.830620 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b3f42a1_7bfb_46d0_9cca_3de49e378aa8.slice/crio-8c5e7e1e3b059735ab51270db972f11fe5c6011cc2c8a3c667a455812c17bd77 WatchSource:0}: Error finding container 8c5e7e1e3b059735ab51270db972f11fe5c6011cc2c8a3c667a455812c17bd77: Status 404 returned error can't find the container with id 8c5e7e1e3b059735ab51270db972f11fe5c6011cc2c8a3c667a455812c17bd77 Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.936161 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d910-account-create-update-trmdz" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.944667 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gnm4t" Feb 16 21:15:04 crc kubenswrapper[4805]: I0216 21:15:04.964332 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-g867x" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.024850 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52022a46-b370-413d-be8d-de7c5d3ed7af-operator-scripts\") pod \"52022a46-b370-413d-be8d-de7c5d3ed7af\" (UID: \"52022a46-b370-413d-be8d-de7c5d3ed7af\") " Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.024892 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a65d522-8682-4311-9e3a-d6575370411b-operator-scripts\") pod \"2a65d522-8682-4311-9e3a-d6575370411b\" (UID: \"2a65d522-8682-4311-9e3a-d6575370411b\") " Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.024923 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lctf\" (UniqueName: \"kubernetes.io/projected/2a65d522-8682-4311-9e3a-d6575370411b-kube-api-access-2lctf\") pod \"2a65d522-8682-4311-9e3a-d6575370411b\" (UID: \"2a65d522-8682-4311-9e3a-d6575370411b\") " Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.025061 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v87gx\" (UniqueName: \"kubernetes.io/projected/adc7e606-c1b5-4a97-bca8-21866460d586-kube-api-access-v87gx\") pod \"adc7e606-c1b5-4a97-bca8-21866460d586\" (UID: \"adc7e606-c1b5-4a97-bca8-21866460d586\") " Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.025082 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn4tg\" (UniqueName: \"kubernetes.io/projected/52022a46-b370-413d-be8d-de7c5d3ed7af-kube-api-access-jn4tg\") pod \"52022a46-b370-413d-be8d-de7c5d3ed7af\" (UID: \"52022a46-b370-413d-be8d-de7c5d3ed7af\") " Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.025141 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adc7e606-c1b5-4a97-bca8-21866460d586-operator-scripts\") pod \"adc7e606-c1b5-4a97-bca8-21866460d586\" (UID: \"adc7e606-c1b5-4a97-bca8-21866460d586\") " Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.025449 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52022a46-b370-413d-be8d-de7c5d3ed7af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52022a46-b370-413d-be8d-de7c5d3ed7af" (UID: "52022a46-b370-413d-be8d-de7c5d3ed7af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.025994 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc7e606-c1b5-4a97-bca8-21866460d586-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "adc7e606-c1b5-4a97-bca8-21866460d586" (UID: "adc7e606-c1b5-4a97-bca8-21866460d586"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.026260 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adc7e606-c1b5-4a97-bca8-21866460d586-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.026282 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52022a46-b370-413d-be8d-de7c5d3ed7af-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.026758 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a65d522-8682-4311-9e3a-d6575370411b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a65d522-8682-4311-9e3a-d6575370411b" (UID: "2a65d522-8682-4311-9e3a-d6575370411b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.030774 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a65d522-8682-4311-9e3a-d6575370411b-kube-api-access-2lctf" (OuterVolumeSpecName: "kube-api-access-2lctf") pod "2a65d522-8682-4311-9e3a-d6575370411b" (UID: "2a65d522-8682-4311-9e3a-d6575370411b"). InnerVolumeSpecName "kube-api-access-2lctf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.030837 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc7e606-c1b5-4a97-bca8-21866460d586-kube-api-access-v87gx" (OuterVolumeSpecName: "kube-api-access-v87gx") pod "adc7e606-c1b5-4a97-bca8-21866460d586" (UID: "adc7e606-c1b5-4a97-bca8-21866460d586"). InnerVolumeSpecName "kube-api-access-v87gx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.043279 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52022a46-b370-413d-be8d-de7c5d3ed7af-kube-api-access-jn4tg" (OuterVolumeSpecName: "kube-api-access-jn4tg") pod "52022a46-b370-413d-be8d-de7c5d3ed7af" (UID: "52022a46-b370-413d-be8d-de7c5d3ed7af"). InnerVolumeSpecName "kube-api-access-jn4tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.128800 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a65d522-8682-4311-9e3a-d6575370411b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.129218 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lctf\" (UniqueName: \"kubernetes.io/projected/2a65d522-8682-4311-9e3a-d6575370411b-kube-api-access-2lctf\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.129230 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v87gx\" (UniqueName: \"kubernetes.io/projected/adc7e606-c1b5-4a97-bca8-21866460d586-kube-api-access-v87gx\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.129239 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn4tg\" (UniqueName: \"kubernetes.io/projected/52022a46-b370-413d-be8d-de7c5d3ed7af-kube-api-access-jn4tg\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.513527 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gnm4t" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.513525 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gnm4t" event={"ID":"2a65d522-8682-4311-9e3a-d6575370411b","Type":"ContainerDied","Data":"c88af98bd8a75a4466dd921e321962b51335bba5f9441c5081abe78d471295e1"} Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.513588 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88af98bd8a75a4466dd921e321962b51335bba5f9441c5081abe78d471295e1" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.516624 4805 generic.go:334] "Generic (PLEG): container finished" podID="5eab1109-6fc8-446e-b797-fc5e11e18f5e" containerID="4a68fee97960fcc52b825c1dd5e18ca82895920228e7a5d63c663af0cd45cfcf" exitCode=0 Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.516698 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kfgdf" event={"ID":"5eab1109-6fc8-446e-b797-fc5e11e18f5e","Type":"ContainerDied","Data":"4a68fee97960fcc52b825c1dd5e18ca82895920228e7a5d63c663af0cd45cfcf"} Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.520009 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-g867x" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.519962 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g867x" event={"ID":"52022a46-b370-413d-be8d-de7c5d3ed7af","Type":"ContainerDied","Data":"f23a74b8c2d6a41ee1712a882b0eb685640824ea2074b7b181c84a4dec97ea1b"} Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.520119 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f23a74b8c2d6a41ee1712a882b0eb685640824ea2074b7b181c84a4dec97ea1b" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.521859 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d910-account-create-update-trmdz" event={"ID":"adc7e606-c1b5-4a97-bca8-21866460d586","Type":"ContainerDied","Data":"ecf78b5e8c83ee96ae0b81e89b92c095276650f60d03a0f9c38c42d3b5601631"} Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.521892 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf78b5e8c83ee96ae0b81e89b92c095276650f60d03a0f9c38c42d3b5601631" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.521920 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d910-account-create-update-trmdz" Feb 16 21:15:05 crc kubenswrapper[4805]: I0216 21:15:05.523525 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" event={"ID":"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8","Type":"ContainerStarted","Data":"8c5e7e1e3b059735ab51270db972f11fe5c6011cc2c8a3c667a455812c17bd77"} Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.710708 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6rt5d"] Feb 16 21:15:06 crc kubenswrapper[4805]: E0216 21:15:06.711513 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc7e606-c1b5-4a97-bca8-21866460d586" containerName="mariadb-account-create-update" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.711529 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc7e606-c1b5-4a97-bca8-21866460d586" containerName="mariadb-account-create-update" Feb 16 21:15:06 crc kubenswrapper[4805]: E0216 21:15:06.711545 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a65d522-8682-4311-9e3a-d6575370411b" containerName="mariadb-account-create-update" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.711551 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a65d522-8682-4311-9e3a-d6575370411b" containerName="mariadb-account-create-update" Feb 16 21:15:06 crc kubenswrapper[4805]: E0216 21:15:06.711569 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52022a46-b370-413d-be8d-de7c5d3ed7af" containerName="mariadb-database-create" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.711575 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="52022a46-b370-413d-be8d-de7c5d3ed7af" containerName="mariadb-database-create" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.711805 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2a65d522-8682-4311-9e3a-d6575370411b" containerName="mariadb-account-create-update" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.711822 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc7e606-c1b5-4a97-bca8-21866460d586" containerName="mariadb-account-create-update" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.711834 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="52022a46-b370-413d-be8d-de7c5d3ed7af" containerName="mariadb-database-create" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.712607 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.715244 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.715638 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hrrrc" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.721892 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6rt5d"] Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.762986 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-config-data\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.763128 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-combined-ca-bundle\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc 
kubenswrapper[4805]: I0216 21:15:06.763485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-db-sync-config-data\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.763610 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-954fm\" (UniqueName: \"kubernetes.io/projected/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-kube-api-access-954fm\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.866357 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-config-data\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.866445 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-combined-ca-bundle\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.867406 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-db-sync-config-data\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.867454 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-954fm\" (UniqueName: \"kubernetes.io/projected/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-kube-api-access-954fm\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.873777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-config-data\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.874516 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-combined-ca-bundle\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.882714 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-db-sync-config-data\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:06 crc kubenswrapper[4805]: I0216 21:15:06.884668 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-954fm\" (UniqueName: \"kubernetes.io/projected/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-kube-api-access-954fm\") pod \"glance-db-sync-6rt5d\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") " pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.041270 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6rt5d" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.207143 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kfgdf" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.290432 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-dns-svc\") pod \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.290490 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj8jt\" (UniqueName: \"kubernetes.io/projected/5eab1109-6fc8-446e-b797-fc5e11e18f5e-kube-api-access-nj8jt\") pod \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.290534 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-config\") pod \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.290586 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-sb\") pod \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\" (UID: \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.290636 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-nb\") pod \"5eab1109-6fc8-446e-b797-fc5e11e18f5e\" (UID: 
\"5eab1109-6fc8-446e-b797-fc5e11e18f5e\") " Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.295931 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eab1109-6fc8-446e-b797-fc5e11e18f5e-kube-api-access-nj8jt" (OuterVolumeSpecName: "kube-api-access-nj8jt") pod "5eab1109-6fc8-446e-b797-fc5e11e18f5e" (UID: "5eab1109-6fc8-446e-b797-fc5e11e18f5e"). InnerVolumeSpecName "kube-api-access-nj8jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.397425 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj8jt\" (UniqueName: \"kubernetes.io/projected/5eab1109-6fc8-446e-b797-fc5e11e18f5e-kube-api-access-nj8jt\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.486102 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5eab1109-6fc8-446e-b797-fc5e11e18f5e" (UID: "5eab1109-6fc8-446e-b797-fc5e11e18f5e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.498982 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.538065 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5eab1109-6fc8-446e-b797-fc5e11e18f5e" (UID: "5eab1109-6fc8-446e-b797-fc5e11e18f5e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.545638 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5eab1109-6fc8-446e-b797-fc5e11e18f5e" (UID: "5eab1109-6fc8-446e-b797-fc5e11e18f5e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.554360 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kfgdf" event={"ID":"5eab1109-6fc8-446e-b797-fc5e11e18f5e","Type":"ContainerDied","Data":"cc25d67a75fd325e7fad9fc86d1ada38d1f36600d23ff54b6ede22f6d44f6dca"} Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.554410 4805 scope.go:117] "RemoveContainer" containerID="4a68fee97960fcc52b825c1dd5e18ca82895920228e7a5d63c663af0cd45cfcf" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.554503 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kfgdf" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.557269 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0","Type":"ContainerStarted","Data":"63b5c23bfc15b4e8f364513e1eae33a8da3477a30592740fa648b615d34934f7"} Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.558861 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-config" (OuterVolumeSpecName: "config") pod "5eab1109-6fc8-446e-b797-fc5e11e18f5e" (UID: "5eab1109-6fc8-446e-b797-fc5e11e18f5e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.559421 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" event={"ID":"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8","Type":"ContainerStarted","Data":"c7dda31eddc9cc163d3c47e7b44967fc4364b95ed92c7547082337f67eb8836c"} Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.580181 4805 scope.go:117] "RemoveContainer" containerID="d9917d7bd2194f829e23c64cc0b8c56918b60973eb15604517406ef67f3e83e7" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.583221 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" podStartSLOduration=7.583208762 podStartE2EDuration="7.583208762s" podCreationTimestamp="2026-02-16 21:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:07.580354384 +0000 UTC m=+1125.399037679" watchObservedRunningTime="2026-02-16 21:15:07.583208762 +0000 UTC m=+1125.401892057" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.600825 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.600857 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.600879 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eab1109-6fc8-446e-b797-fc5e11e18f5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:07 crc 
kubenswrapper[4805]: I0216 21:15:07.703928 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-6ff9-account-create-update-smp8q"] Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.770160 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q"] Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.843756 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6rt5d"] Feb 16 21:15:07 crc kubenswrapper[4805]: W0216 21:15:07.851674 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37c2e4e_f2f5_44a3_886b_91fa1a4d4ff8.slice/crio-16318824fe45e7ba0016fb93c6e415e897e9b3a9228b6a551c30c6c8de046c59 WatchSource:0}: Error finding container 16318824fe45e7ba0016fb93c6e415e897e9b3a9228b6a551c30c6c8de046c59: Status 404 returned error can't find the container with id 16318824fe45e7ba0016fb93c6e415e897e9b3a9228b6a551c30c6c8de046c59 Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.891472 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kfgdf"] Feb 16 21:15:07 crc kubenswrapper[4805]: I0216 21:15:07.895904 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kfgdf"] Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.100144 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.100484 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.100545 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.101500 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f1616af32f423ba92145c911bf150c6fe834753890981f8e09fc4faccf82ee6"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.101591 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://5f1616af32f423ba92145c911bf150c6fe834753890981f8e09fc4faccf82ee6" gracePeriod=600 Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.573234 4805 generic.go:334] "Generic (PLEG): container finished" podID="6b3f42a1-7bfb-46d0-9cca-3de49e378aa8" containerID="c7dda31eddc9cc163d3c47e7b44967fc4364b95ed92c7547082337f67eb8836c" exitCode=0 Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.573324 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" event={"ID":"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8","Type":"ContainerDied","Data":"c7dda31eddc9cc163d3c47e7b44967fc4364b95ed92c7547082337f67eb8836c"} Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.576480 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="5f1616af32f423ba92145c911bf150c6fe834753890981f8e09fc4faccf82ee6" 
exitCode=0 Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.576573 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"5f1616af32f423ba92145c911bf150c6fe834753890981f8e09fc4faccf82ee6"} Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.576635 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"3695f3bf70d1d75f31deaf59ecf0f2732a5f8a503501ca8da83dcad9ebd6dcda"} Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.576704 4805 scope.go:117] "RemoveContainer" containerID="e746550f7cf0d50be9739ce7e97b17ef93c5c8ee315aa0d1535183b0c6cfe9db" Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.579599 4805 generic.go:334] "Generic (PLEG): container finished" podID="e7e67718-c8cf-4669-8b07-36e2fcc68898" containerID="da8ae59ebf53b3be9f879b8fcbfddf58116281b591e167acdaba570d7693881b" exitCode=0 Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.579653 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" event={"ID":"e7e67718-c8cf-4669-8b07-36e2fcc68898","Type":"ContainerDied","Data":"da8ae59ebf53b3be9f879b8fcbfddf58116281b591e167acdaba570d7693881b"} Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.579673 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" event={"ID":"e7e67718-c8cf-4669-8b07-36e2fcc68898","Type":"ContainerStarted","Data":"24ff4b1551a4bbf1a21f1f33fffe4c117b21fd1d0a36cb148f1b9db02b52ad95"} Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.591911 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tgvc9" 
event={"ID":"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab","Type":"ContainerStarted","Data":"c458b1450a00dcf300950a59da7b8db30288aa80786a4559c309ae701b581177"} Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.595460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6rt5d" event={"ID":"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8","Type":"ContainerStarted","Data":"16318824fe45e7ba0016fb93c6e415e897e9b3a9228b6a551c30c6c8de046c59"} Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.597386 4805 generic.go:334] "Generic (PLEG): container finished" podID="7feac1e5-959a-468e-905c-62a5a07f98d4" containerID="2104153eb99c88ca9dddf4d1a825a38debe26da9e10448b6359e306b4d3a5d60" exitCode=0 Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.597435 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" event={"ID":"7feac1e5-959a-468e-905c-62a5a07f98d4","Type":"ContainerDied","Data":"2104153eb99c88ca9dddf4d1a825a38debe26da9e10448b6359e306b4d3a5d60"} Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.597462 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" event={"ID":"7feac1e5-959a-468e-905c-62a5a07f98d4","Type":"ContainerStarted","Data":"b42432f7620839a36869c98f9bc7a0f3e2bc657ef53e7db4fcbad6876405178a"} Feb 16 21:15:08 crc kubenswrapper[4805]: I0216 21:15:08.677658 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-tgvc9" podStartSLOduration=3.325079544 podStartE2EDuration="10.677636022s" podCreationTimestamp="2026-02-16 21:14:58 +0000 UTC" firstStartedPulling="2026-02-16 21:14:59.832406158 +0000 UTC m=+1117.651089453" lastFinishedPulling="2026-02-16 21:15:07.184962636 +0000 UTC m=+1125.003645931" observedRunningTime="2026-02-16 21:15:08.663861012 +0000 UTC m=+1126.482544317" watchObservedRunningTime="2026-02-16 21:15:08.677636022 +0000 UTC 
m=+1126.496319317" Feb 16 21:15:09 crc kubenswrapper[4805]: I0216 21:15:09.612664 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eab1109-6fc8-446e-b797-fc5e11e18f5e" path="/var/lib/kubelet/pods/5eab1109-6fc8-446e-b797-fc5e11e18f5e/volumes" Feb 16 21:15:09 crc kubenswrapper[4805]: I0216 21:15:09.620498 4805 generic.go:334] "Generic (PLEG): container finished" podID="95a93760-333e-4689-a64c-c3534a04cec0" containerID="7aeeed8f72d2e51caa4f2b0119cd92aa83ce279f4caef23c61ee0897a9f4e84f" exitCode=0 Feb 16 21:15:09 crc kubenswrapper[4805]: I0216 21:15:09.620609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"95a93760-333e-4689-a64c-c3534a04cec0","Type":"ContainerDied","Data":"7aeeed8f72d2e51caa4f2b0119cd92aa83ce279f4caef23c61ee0897a9f4e84f"} Feb 16 21:15:09 crc kubenswrapper[4805]: I0216 21:15:09.688116 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gnm4t"] Feb 16 21:15:09 crc kubenswrapper[4805]: I0216 21:15:09.697334 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gnm4t"] Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.303062 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.316079 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.369623 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.370630 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-config-volume\") pod \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.370713 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhqdt\" (UniqueName: \"kubernetes.io/projected/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-kube-api-access-jhqdt\") pod \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.371614 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-config-volume" (OuterVolumeSpecName: "config-volume") pod "6b3f42a1-7bfb-46d0-9cca-3de49e378aa8" (UID: "6b3f42a1-7bfb-46d0-9cca-3de49e378aa8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.371926 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pbdv\" (UniqueName: \"kubernetes.io/projected/7feac1e5-959a-468e-905c-62a5a07f98d4-kube-api-access-8pbdv\") pod \"7feac1e5-959a-468e-905c-62a5a07f98d4\" (UID: \"7feac1e5-959a-468e-905c-62a5a07f98d4\") " Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.371973 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7feac1e5-959a-468e-905c-62a5a07f98d4-operator-scripts\") pod \"7feac1e5-959a-468e-905c-62a5a07f98d4\" (UID: \"7feac1e5-959a-468e-905c-62a5a07f98d4\") " Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.372533 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7feac1e5-959a-468e-905c-62a5a07f98d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7feac1e5-959a-468e-905c-62a5a07f98d4" (UID: "7feac1e5-959a-468e-905c-62a5a07f98d4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.372651 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-secret-volume\") pod \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\" (UID: \"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8\") " Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.373836 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.373874 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7feac1e5-959a-468e-905c-62a5a07f98d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.379504 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6b3f42a1-7bfb-46d0-9cca-3de49e378aa8" (UID: "6b3f42a1-7bfb-46d0-9cca-3de49e378aa8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.379619 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-kube-api-access-jhqdt" (OuterVolumeSpecName: "kube-api-access-jhqdt") pod "6b3f42a1-7bfb-46d0-9cca-3de49e378aa8" (UID: "6b3f42a1-7bfb-46d0-9cca-3de49e378aa8"). InnerVolumeSpecName "kube-api-access-jhqdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.380557 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7feac1e5-959a-468e-905c-62a5a07f98d4-kube-api-access-8pbdv" (OuterVolumeSpecName: "kube-api-access-8pbdv") pod "7feac1e5-959a-468e-905c-62a5a07f98d4" (UID: "7feac1e5-959a-468e-905c-62a5a07f98d4"). InnerVolumeSpecName "kube-api-access-8pbdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.475251 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-546hw\" (UniqueName: \"kubernetes.io/projected/e7e67718-c8cf-4669-8b07-36e2fcc68898-kube-api-access-546hw\") pod \"e7e67718-c8cf-4669-8b07-36e2fcc68898\" (UID: \"e7e67718-c8cf-4669-8b07-36e2fcc68898\") " Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.475344 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7e67718-c8cf-4669-8b07-36e2fcc68898-operator-scripts\") pod \"e7e67718-c8cf-4669-8b07-36e2fcc68898\" (UID: \"e7e67718-c8cf-4669-8b07-36e2fcc68898\") " Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.475958 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhqdt\" (UniqueName: \"kubernetes.io/projected/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-kube-api-access-jhqdt\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.475980 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pbdv\" (UniqueName: \"kubernetes.io/projected/7feac1e5-959a-468e-905c-62a5a07f98d4-kube-api-access-8pbdv\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.475989 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.476033 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e67718-c8cf-4669-8b07-36e2fcc68898-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7e67718-c8cf-4669-8b07-36e2fcc68898" (UID: "e7e67718-c8cf-4669-8b07-36e2fcc68898"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.478509 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e67718-c8cf-4669-8b07-36e2fcc68898-kube-api-access-546hw" (OuterVolumeSpecName: "kube-api-access-546hw") pod "e7e67718-c8cf-4669-8b07-36e2fcc68898" (UID: "e7e67718-c8cf-4669-8b07-36e2fcc68898"). InnerVolumeSpecName "kube-api-access-546hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.578365 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-546hw\" (UniqueName: \"kubernetes.io/projected/e7e67718-c8cf-4669-8b07-36e2fcc68898-kube-api-access-546hw\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.578403 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7e67718-c8cf-4669-8b07-36e2fcc68898-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.636603 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" event={"ID":"7feac1e5-959a-468e-905c-62a5a07f98d4","Type":"ContainerDied","Data":"b42432f7620839a36869c98f9bc7a0f3e2bc657ef53e7db4fcbad6876405178a"} Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.636635 4805 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6ff9-account-create-update-smp8q" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.636659 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42432f7620839a36869c98f9bc7a0f3e2bc657ef53e7db4fcbad6876405178a" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.638826 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" event={"ID":"6b3f42a1-7bfb-46d0-9cca-3de49e378aa8","Type":"ContainerDied","Data":"8c5e7e1e3b059735ab51270db972f11fe5c6011cc2c8a3c667a455812c17bd77"} Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.638870 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c5e7e1e3b059735ab51270db972f11fe5c6011cc2c8a3c667a455812c17bd77" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.638938 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.640744 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" event={"ID":"e7e67718-c8cf-4669-8b07-36e2fcc68898","Type":"ContainerDied","Data":"24ff4b1551a4bbf1a21f1f33fffe4c117b21fd1d0a36cb148f1b9db02b52ad95"} Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.640774 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ff4b1551a4bbf1a21f1f33fffe4c117b21fd1d0a36cb148f1b9db02b52ad95" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.640832 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.644802 4805 generic.go:334] "Generic (PLEG): container finished" podID="7f897110-86a6-4edb-a453-a1322e0a580f" containerID="450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9" exitCode=0 Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.644869 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f897110-86a6-4edb-a453-a1322e0a580f","Type":"ContainerDied","Data":"450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9"} Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.646515 4805 generic.go:334] "Generic (PLEG): container finished" podID="8a48053f-4668-43af-bda4-7af014d6457d" containerID="bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d" exitCode=0 Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.646605 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a48053f-4668-43af-bda4-7af014d6457d","Type":"ContainerDied","Data":"bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d"} Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.650221 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"95a93760-333e-4689-a64c-c3534a04cec0","Type":"ContainerStarted","Data":"5e7b36a5647fdf2ee5ecfce9eb3f96cc0f4ba00eab7f3453e540f0d09a432559"} Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.650927 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.656896 4805 generic.go:334] "Generic (PLEG): container finished" podID="14fe6c77-adbd-4abe-9aff-7bb72474d47b" containerID="5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877" exitCode=0 Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.657016 
4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"14fe6c77-adbd-4abe-9aff-7bb72474d47b","Type":"ContainerDied","Data":"5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877"} Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.670764 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0","Type":"ContainerStarted","Data":"677139dfb2585d763bd4a1cfafc5c9f8702da0d814e269e88e41e2f748be9181"} Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.702338 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=40.960701494 podStartE2EDuration="1m4.702321998s" podCreationTimestamp="2026-02-16 21:14:06 +0000 UTC" firstStartedPulling="2026-02-16 21:14:09.387736621 +0000 UTC m=+1067.206419916" lastFinishedPulling="2026-02-16 21:14:33.129357085 +0000 UTC m=+1090.948040420" observedRunningTime="2026-02-16 21:15:10.689036301 +0000 UTC m=+1128.507719596" watchObservedRunningTime="2026-02-16 21:15:10.702321998 +0000 UTC m=+1128.521005293" Feb 16 21:15:10 crc kubenswrapper[4805]: I0216 21:15:10.884211 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0" Feb 16 21:15:10 crc kubenswrapper[4805]: E0216 21:15:10.884420 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:15:10 crc kubenswrapper[4805]: E0216 21:15:10.884575 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:15:10 crc kubenswrapper[4805]: E0216 21:15:10.884631 4805 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift podName:b51bad1e-99c6-4e2b-ae2b-c7e338ef235e nodeName:}" failed. No retries permitted until 2026-02-16 21:15:26.884613831 +0000 UTC m=+1144.703297126 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift") pod "swift-storage-0" (UID: "b51bad1e-99c6-4e2b-ae2b-c7e338ef235e") : configmap "swift-ring-files" not found Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.106897 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7cb64748c-ggr6n" podUID="6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" containerName="console" containerID="cri-o://cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66" gracePeriod=15 Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.617104 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a65d522-8682-4311-9e3a-d6575370411b" path="/var/lib/kubelet/pods/2a65d522-8682-4311-9e3a-d6575370411b/volumes" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.617986 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.642513 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7cb64748c-ggr6n_6ba8a3e9-b193-4ec2-983d-e5ce4efd302f/console/0.log" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.642578 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.697258 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-oauth-serving-cert\") pod \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.697311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cstgl\" (UniqueName: \"kubernetes.io/projected/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-kube-api-access-cstgl\") pod \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.697404 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-serving-cert\") pod \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.697435 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-trusted-ca-bundle\") pod \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.697475 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-service-ca\") pod \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.697530 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-config\") pod \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.697550 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-oauth-config\") pod \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\" (UID: \"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f\") " Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.699121 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" (UID: "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.699349 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-config" (OuterVolumeSpecName: "console-config") pod "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" (UID: "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.700360 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-service-ca" (OuterVolumeSpecName: "service-ca") pod "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" (UID: "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.700509 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" (UID: "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.701205 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f897110-86a6-4edb-a453-a1322e0a580f","Type":"ContainerStarted","Data":"4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272"} Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.701399 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.703794 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" (UID: "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.704700 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7cb64748c-ggr6n_6ba8a3e9-b193-4ec2-983d-e5ce4efd302f/console/0.log" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.704953 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cb64748c-ggr6n" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.704838 4805 generic.go:334] "Generic (PLEG): container finished" podID="6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" containerID="cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66" exitCode=2 Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.705233 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cb64748c-ggr6n" event={"ID":"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f","Type":"ContainerDied","Data":"cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66"} Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.705258 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cb64748c-ggr6n" event={"ID":"6ba8a3e9-b193-4ec2-983d-e5ce4efd302f","Type":"ContainerDied","Data":"8791cf5b98e122f0b32c99941ec157eca6c2101af35c57577a8f53a3c29db66e"} Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.705287 4805 scope.go:117] "RemoveContainer" containerID="cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.708340 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a48053f-4668-43af-bda4-7af014d6457d","Type":"ContainerStarted","Data":"b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5"} Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.708676 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.709333 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-kube-api-access-cstgl" (OuterVolumeSpecName: "kube-api-access-cstgl") pod "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" (UID: "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f"). 
InnerVolumeSpecName "kube-api-access-cstgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.710694 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"14fe6c77-adbd-4abe-9aff-7bb72474d47b","Type":"ContainerStarted","Data":"cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1"} Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.710980 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.722777 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" (UID: "6ba8a3e9-b193-4ec2-983d-e5ce4efd302f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.730281 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.283574717 podStartE2EDuration="1m4.730260481s" podCreationTimestamp="2026-02-16 21:14:07 +0000 UTC" firstStartedPulling="2026-02-16 21:14:09.478570517 +0000 UTC m=+1067.297253812" lastFinishedPulling="2026-02-16 21:14:34.925256281 +0000 UTC m=+1092.743939576" observedRunningTime="2026-02-16 21:15:11.724166503 +0000 UTC m=+1129.542849788" watchObservedRunningTime="2026-02-16 21:15:11.730260481 +0000 UTC m=+1129.548943776" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.764234 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=39.97760513 podStartE2EDuration="1m5.764214049s" podCreationTimestamp="2026-02-16 21:14:06 +0000 UTC" firstStartedPulling="2026-02-16 21:14:09.385552551 +0000 
UTC m=+1067.204235846" lastFinishedPulling="2026-02-16 21:14:35.17216147 +0000 UTC m=+1092.990844765" observedRunningTime="2026-02-16 21:15:11.752794403 +0000 UTC m=+1129.571477698" watchObservedRunningTime="2026-02-16 21:15:11.764214049 +0000 UTC m=+1129.582897344" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.776169 4805 scope.go:117] "RemoveContainer" containerID="cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66" Feb 16 21:15:11 crc kubenswrapper[4805]: E0216 21:15:11.780089 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66\": container with ID starting with cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66 not found: ID does not exist" containerID="cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.780121 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66"} err="failed to get container status \"cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66\": rpc error: code = NotFound desc = could not find container \"cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66\": container with ID starting with cef97d782bc7b02753f727fae807df3a2889462afdd94c431e3fa009201dae66 not found: ID does not exist" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.799499 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.799536 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.799546 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.799555 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.799563 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.799572 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.799580 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cstgl\" (UniqueName: \"kubernetes.io/projected/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f-kube-api-access-cstgl\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:11 crc kubenswrapper[4805]: I0216 21:15:11.802326 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.065338595 podStartE2EDuration="1m5.802308511s" podCreationTimestamp="2026-02-16 21:14:06 +0000 UTC" firstStartedPulling="2026-02-16 21:14:09.436414927 +0000 UTC m=+1067.255098222" lastFinishedPulling="2026-02-16 21:14:35.173384843 +0000 UTC m=+1092.992068138" observedRunningTime="2026-02-16 21:15:11.789482427 +0000 UTC 
m=+1129.608165722" watchObservedRunningTime="2026-02-16 21:15:11.802308511 +0000 UTC m=+1129.620991806" Feb 16 21:15:12 crc kubenswrapper[4805]: I0216 21:15:12.040173 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7cb64748c-ggr6n"] Feb 16 21:15:12 crc kubenswrapper[4805]: I0216 21:15:12.049784 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7cb64748c-ggr6n"] Feb 16 21:15:12 crc kubenswrapper[4805]: I0216 21:15:12.223834 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ntwbd" podUID="127a1d16-9779-4760-88eb-28d61312ef0f" containerName="ovn-controller" probeResult="failure" output=< Feb 16 21:15:12 crc kubenswrapper[4805]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 21:15:12 crc kubenswrapper[4805]: > Feb 16 21:15:13 crc kubenswrapper[4805]: I0216 21:15:13.613289 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" path="/var/lib/kubelet/pods/6ba8a3e9-b193-4ec2-983d-e5ce4efd302f/volumes" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369195 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:15:14 crc kubenswrapper[4805]: E0216 21:15:14.369581 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eab1109-6fc8-446e-b797-fc5e11e18f5e" containerName="dnsmasq-dns" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369594 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eab1109-6fc8-446e-b797-fc5e11e18f5e" containerName="dnsmasq-dns" Feb 16 21:15:14 crc kubenswrapper[4805]: E0216 21:15:14.369607 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7feac1e5-959a-468e-905c-62a5a07f98d4" containerName="mariadb-account-create-update" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369613 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7feac1e5-959a-468e-905c-62a5a07f98d4" containerName="mariadb-account-create-update" Feb 16 21:15:14 crc kubenswrapper[4805]: E0216 21:15:14.369626 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e67718-c8cf-4669-8b07-36e2fcc68898" containerName="mariadb-database-create" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369632 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e67718-c8cf-4669-8b07-36e2fcc68898" containerName="mariadb-database-create" Feb 16 21:15:14 crc kubenswrapper[4805]: E0216 21:15:14.369641 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eab1109-6fc8-446e-b797-fc5e11e18f5e" containerName="init" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369648 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eab1109-6fc8-446e-b797-fc5e11e18f5e" containerName="init" Feb 16 21:15:14 crc kubenswrapper[4805]: E0216 21:15:14.369659 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" containerName="console" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369666 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" containerName="console" Feb 16 21:15:14 crc kubenswrapper[4805]: E0216 21:15:14.369688 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b3f42a1-7bfb-46d0-9cca-3de49e378aa8" containerName="collect-profiles" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369694 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3f42a1-7bfb-46d0-9cca-3de49e378aa8" containerName="collect-profiles" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369889 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b3f42a1-7bfb-46d0-9cca-3de49e378aa8" containerName="collect-profiles" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369900 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7feac1e5-959a-468e-905c-62a5a07f98d4" containerName="mariadb-account-create-update" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369911 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba8a3e9-b193-4ec2-983d-e5ce4efd302f" containerName="console" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369929 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e67718-c8cf-4669-8b07-36e2fcc68898" containerName="mariadb-database-create" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.369943 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eab1109-6fc8-446e-b797-fc5e11e18f5e" containerName="dnsmasq-dns" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.370558 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.373580 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.382948 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.451607 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-config-data\") pod \"mysqld-exporter-0\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " pod="openstack/mysqld-exporter-0" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.451738 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " pod="openstack/mysqld-exporter-0" Feb 16 21:15:14 crc 
kubenswrapper[4805]: I0216 21:15:14.451790 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vwr2\" (UniqueName: \"kubernetes.io/projected/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-kube-api-access-6vwr2\") pod \"mysqld-exporter-0\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " pod="openstack/mysqld-exporter-0" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.554161 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-config-data\") pod \"mysqld-exporter-0\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " pod="openstack/mysqld-exporter-0" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.554295 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " pod="openstack/mysqld-exporter-0" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.554342 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vwr2\" (UniqueName: \"kubernetes.io/projected/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-kube-api-access-6vwr2\") pod \"mysqld-exporter-0\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " pod="openstack/mysqld-exporter-0" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.561313 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-config-data\") pod \"mysqld-exporter-0\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " pod="openstack/mysqld-exporter-0" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.568623 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " pod="openstack/mysqld-exporter-0" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.570397 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vwr2\" (UniqueName: \"kubernetes.io/projected/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-kube-api-access-6vwr2\") pod \"mysqld-exporter-0\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " pod="openstack/mysqld-exporter-0" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.683339 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-q2dfh"] Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.693394 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q2dfh" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.698449 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.707631 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.739672 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q2dfh"] Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.746298 4805 generic.go:334] "Generic (PLEG): container finished" podID="8f409f3b-50d6-47b1-9abb-e90ba2cc03ab" containerID="c458b1450a00dcf300950a59da7b8db30288aa80786a4559c309ae701b581177" exitCode=0 Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.746547 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tgvc9" event={"ID":"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab","Type":"ContainerDied","Data":"c458b1450a00dcf300950a59da7b8db30288aa80786a4559c309ae701b581177"} Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.760599 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e172a03-3c54-4817-954f-247328c52578-operator-scripts\") pod \"root-account-create-update-q2dfh\" (UID: \"5e172a03-3c54-4817-954f-247328c52578\") " pod="openstack/root-account-create-update-q2dfh" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.760917 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjps6\" (UniqueName: \"kubernetes.io/projected/5e172a03-3c54-4817-954f-247328c52578-kube-api-access-hjps6\") pod \"root-account-create-update-q2dfh\" (UID: \"5e172a03-3c54-4817-954f-247328c52578\") " pod="openstack/root-account-create-update-q2dfh" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.862859 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e172a03-3c54-4817-954f-247328c52578-operator-scripts\") pod \"root-account-create-update-q2dfh\" (UID: \"5e172a03-3c54-4817-954f-247328c52578\") " 
pod="openstack/root-account-create-update-q2dfh" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.862941 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjps6\" (UniqueName: \"kubernetes.io/projected/5e172a03-3c54-4817-954f-247328c52578-kube-api-access-hjps6\") pod \"root-account-create-update-q2dfh\" (UID: \"5e172a03-3c54-4817-954f-247328c52578\") " pod="openstack/root-account-create-update-q2dfh" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.863979 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e172a03-3c54-4817-954f-247328c52578-operator-scripts\") pod \"root-account-create-update-q2dfh\" (UID: \"5e172a03-3c54-4817-954f-247328c52578\") " pod="openstack/root-account-create-update-q2dfh" Feb 16 21:15:14 crc kubenswrapper[4805]: I0216 21:15:14.883802 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjps6\" (UniqueName: \"kubernetes.io/projected/5e172a03-3c54-4817-954f-247328c52578-kube-api-access-hjps6\") pod \"root-account-create-update-q2dfh\" (UID: \"5e172a03-3c54-4817-954f-247328c52578\") " pod="openstack/root-account-create-update-q2dfh" Feb 16 21:15:15 crc kubenswrapper[4805]: I0216 21:15:15.032672 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q2dfh" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.221075 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ntwbd" podUID="127a1d16-9779-4760-88eb-28d61312ef0f" containerName="ovn-controller" probeResult="failure" output=< Feb 16 21:15:17 crc kubenswrapper[4805]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 21:15:17 crc kubenswrapper[4805]: > Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.248301 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.255316 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jtmkd" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.486638 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ntwbd-config-rpvvg"] Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.488409 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.506403 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ntwbd-config-rpvvg"] Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.508566 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.617995 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-additional-scripts\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.618053 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-scripts\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.618172 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w5wg\" (UniqueName: \"kubernetes.io/projected/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-kube-api-access-4w5wg\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.618235 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: 
\"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.618349 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-log-ovn\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.618452 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run-ovn\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.720341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-additional-scripts\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.720400 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-scripts\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.720464 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w5wg\" (UniqueName: \"kubernetes.io/projected/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-kube-api-access-4w5wg\") pod 
\"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.720499 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.720606 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-log-ovn\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.720669 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run-ovn\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.721039 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run-ovn\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.721098 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: 
\"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.721779 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-additional-scripts\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.722241 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-log-ovn\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.725253 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-scripts\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.743384 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w5wg\" (UniqueName: \"kubernetes.io/projected/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-kube-api-access-4w5wg\") pod \"ovn-controller-ntwbd-config-rpvvg\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:17 crc kubenswrapper[4805]: I0216 21:15:17.812406 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.393448 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.527416 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-swiftconf\") pod \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.527486 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9qcc\" (UniqueName: \"kubernetes.io/projected/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-kube-api-access-n9qcc\") pod \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.527539 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-etc-swift\") pod \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.527580 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-scripts\") pod \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.527602 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-ring-data-devices\") pod \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.527777 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-combined-ca-bundle\") pod \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.527819 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-dispersionconf\") pod \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\" (UID: \"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab\") " Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.530458 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab" (UID: "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.532221 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab" (UID: "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.537104 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-kube-api-access-n9qcc" (OuterVolumeSpecName: "kube-api-access-n9qcc") pod "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab" (UID: "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab"). InnerVolumeSpecName "kube-api-access-n9qcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.539762 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab" (UID: "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.567327 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-scripts" (OuterVolumeSpecName: "scripts") pod "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab" (UID: "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.576422 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab" (UID: "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.582937 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab" (UID: "8f409f3b-50d6-47b1-9abb-e90ba2cc03ab"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.630372 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.630405 4805 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.630416 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.630426 4805 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.630435 4805 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.630443 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9qcc\" (UniqueName: \"kubernetes.io/projected/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-kube-api-access-n9qcc\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.630452 4805 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8f409f3b-50d6-47b1-9abb-e90ba2cc03ab-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.724999 4805 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.822320 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q2dfh"] Feb 16 21:15:21 crc kubenswrapper[4805]: W0216 21:15:21.826572 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e172a03_3c54_4817_954f_247328c52578.slice/crio-d91efa847702265f8e075895d187494e2bb88ab64482b3fbad035b3855664cf0 WatchSource:0}: Error finding container d91efa847702265f8e075895d187494e2bb88ab64482b3fbad035b3855664cf0: Status 404 returned error can't find the container with id d91efa847702265f8e075895d187494e2bb88ab64482b3fbad035b3855664cf0 Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.850027 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tgvc9" event={"ID":"8f409f3b-50d6-47b1-9abb-e90ba2cc03ab","Type":"ContainerDied","Data":"c3dab34d04392303eee33e1e156b565ad045f30ab2eb54ae75e150772adc80e2"} Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.850059 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-tgvc9" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.850077 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3dab34d04392303eee33e1e156b565ad045f30ab2eb54ae75e150772adc80e2" Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.854901 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0","Type":"ContainerStarted","Data":"7dacd078f3b71b47858926f5f4d988803ee3f912be9ea9852307a88e80d73e22"} Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.856992 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q2dfh" event={"ID":"5e172a03-3c54-4817-954f-247328c52578","Type":"ContainerStarted","Data":"d91efa847702265f8e075895d187494e2bb88ab64482b3fbad035b3855664cf0"} Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.858051 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ntwbd-config-rpvvg"] Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.860000 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9","Type":"ContainerStarted","Data":"3204afaa1daa57b07cc7eb6b3bacd49d64672afcf655cc4dfc546b7b0616d570"} Feb 16 21:15:21 crc kubenswrapper[4805]: I0216 21:15:21.914955 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.763683601 podStartE2EDuration="1m8.91493413s" podCreationTimestamp="2026-02-16 21:14:13 +0000 UTC" firstStartedPulling="2026-02-16 21:14:36.046082827 +0000 UTC m=+1093.864766122" lastFinishedPulling="2026-02-16 21:15:21.197333356 +0000 UTC m=+1139.016016651" observedRunningTime="2026-02-16 21:15:21.907971978 +0000 UTC m=+1139.726655273" watchObservedRunningTime="2026-02-16 21:15:21.91493413 +0000 UTC 
m=+1139.733617415" Feb 16 21:15:22 crc kubenswrapper[4805]: I0216 21:15:22.253125 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ntwbd" Feb 16 21:15:22 crc kubenswrapper[4805]: I0216 21:15:22.883060 4805 generic.go:334] "Generic (PLEG): container finished" podID="5e172a03-3c54-4817-954f-247328c52578" containerID="67f00f3c04a6a3aa4e15e1801aa240b269ee95a26fc0e780bc78b374663a9b02" exitCode=0 Feb 16 21:15:22 crc kubenswrapper[4805]: I0216 21:15:22.883097 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q2dfh" event={"ID":"5e172a03-3c54-4817-954f-247328c52578","Type":"ContainerDied","Data":"67f00f3c04a6a3aa4e15e1801aa240b269ee95a26fc0e780bc78b374663a9b02"} Feb 16 21:15:22 crc kubenswrapper[4805]: I0216 21:15:22.885118 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6rt5d" event={"ID":"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8","Type":"ContainerStarted","Data":"3706d4fbe325e1fe22de6809fa8f9cfa06446baaedc6cab54ab55e2e12834f45"} Feb 16 21:15:22 crc kubenswrapper[4805]: I0216 21:15:22.889032 4805 generic.go:334] "Generic (PLEG): container finished" podID="0fae05ff-a7cc-491e-bdba-5339c24f6dd5" containerID="556959b5d927b0fd5566855488c1a1c2b84dd33bf82f28e866b38c946eb004ca" exitCode=0 Feb 16 21:15:22 crc kubenswrapper[4805]: I0216 21:15:22.889058 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ntwbd-config-rpvvg" event={"ID":"0fae05ff-a7cc-491e-bdba-5339c24f6dd5","Type":"ContainerDied","Data":"556959b5d927b0fd5566855488c1a1c2b84dd33bf82f28e866b38c946eb004ca"} Feb 16 21:15:22 crc kubenswrapper[4805]: I0216 21:15:22.889083 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ntwbd-config-rpvvg" event={"ID":"0fae05ff-a7cc-491e-bdba-5339c24f6dd5","Type":"ContainerStarted","Data":"372a8015068efcb5913e52f00b111961d1d76d253b2c29f17345652d23891b32"} Feb 16 21:15:22 crc 
kubenswrapper[4805]: I0216 21:15:22.920820 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6rt5d" podStartSLOduration=3.528509787 podStartE2EDuration="16.920802945s" podCreationTimestamp="2026-02-16 21:15:06 +0000 UTC" firstStartedPulling="2026-02-16 21:15:07.853848146 +0000 UTC m=+1125.672531441" lastFinishedPulling="2026-02-16 21:15:21.246141304 +0000 UTC m=+1139.064824599" observedRunningTime="2026-02-16 21:15:22.913107672 +0000 UTC m=+1140.731790967" watchObservedRunningTime="2026-02-16 21:15:22.920802945 +0000 UTC m=+1140.739486240" Feb 16 21:15:23 crc kubenswrapper[4805]: I0216 21:15:23.904156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9","Type":"ContainerStarted","Data":"13d9c9217278264ab7874693bdfa449c8e4cd5d1bf29646fef06cef800c79de9"} Feb 16 21:15:23 crc kubenswrapper[4805]: I0216 21:15:23.923443 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=8.53312113 podStartE2EDuration="9.923427889s" podCreationTimestamp="2026-02-16 21:15:14 +0000 UTC" firstStartedPulling="2026-02-16 21:15:21.725690975 +0000 UTC m=+1139.544374270" lastFinishedPulling="2026-02-16 21:15:23.115997734 +0000 UTC m=+1140.934681029" observedRunningTime="2026-02-16 21:15:23.920834217 +0000 UTC m=+1141.739517512" watchObservedRunningTime="2026-02-16 21:15:23.923427889 +0000 UTC m=+1141.742111174" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.478151 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.484414 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q2dfh" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.590685 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w5wg\" (UniqueName: \"kubernetes.io/projected/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-kube-api-access-4w5wg\") pod \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.590812 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjps6\" (UniqueName: \"kubernetes.io/projected/5e172a03-3c54-4817-954f-247328c52578-kube-api-access-hjps6\") pod \"5e172a03-3c54-4817-954f-247328c52578\" (UID: \"5e172a03-3c54-4817-954f-247328c52578\") " Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.590846 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-additional-scripts\") pod \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.590947 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run\") pod \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.590974 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-log-ovn\") pod \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.591030 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run-ovn\") pod \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.591066 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e172a03-3c54-4817-954f-247328c52578-operator-scripts\") pod \"5e172a03-3c54-4817-954f-247328c52578\" (UID: \"5e172a03-3c54-4817-954f-247328c52578\") " Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.591144 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-scripts\") pod \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\" (UID: \"0fae05ff-a7cc-491e-bdba-5339c24f6dd5\") " Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.592848 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0fae05ff-a7cc-491e-bdba-5339c24f6dd5" (UID: "0fae05ff-a7cc-491e-bdba-5339c24f6dd5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.592964 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run" (OuterVolumeSpecName: "var-run") pod "0fae05ff-a7cc-491e-bdba-5339c24f6dd5" (UID: "0fae05ff-a7cc-491e-bdba-5339c24f6dd5"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.593054 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0fae05ff-a7cc-491e-bdba-5339c24f6dd5" (UID: "0fae05ff-a7cc-491e-bdba-5339c24f6dd5"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.593375 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e172a03-3c54-4817-954f-247328c52578-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e172a03-3c54-4817-954f-247328c52578" (UID: "5e172a03-3c54-4817-954f-247328c52578"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.593384 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0fae05ff-a7cc-491e-bdba-5339c24f6dd5" (UID: "0fae05ff-a7cc-491e-bdba-5339c24f6dd5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.593875 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-scripts" (OuterVolumeSpecName: "scripts") pod "0fae05ff-a7cc-491e-bdba-5339c24f6dd5" (UID: "0fae05ff-a7cc-491e-bdba-5339c24f6dd5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.597048 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e172a03-3c54-4817-954f-247328c52578-kube-api-access-hjps6" (OuterVolumeSpecName: "kube-api-access-hjps6") pod "5e172a03-3c54-4817-954f-247328c52578" (UID: "5e172a03-3c54-4817-954f-247328c52578"). InnerVolumeSpecName "kube-api-access-hjps6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.598711 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-kube-api-access-4w5wg" (OuterVolumeSpecName: "kube-api-access-4w5wg") pod "0fae05ff-a7cc-491e-bdba-5339c24f6dd5" (UID: "0fae05ff-a7cc-491e-bdba-5339c24f6dd5"). InnerVolumeSpecName "kube-api-access-4w5wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.692992 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjps6\" (UniqueName: \"kubernetes.io/projected/5e172a03-3c54-4817-954f-247328c52578-kube-api-access-hjps6\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.693390 4805 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.693467 4805 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.693519 4805 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.693574 4805 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.693622 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e172a03-3c54-4817-954f-247328c52578-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.693672 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.693758 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w5wg\" (UniqueName: \"kubernetes.io/projected/0fae05ff-a7cc-491e-bdba-5339c24f6dd5-kube-api-access-4w5wg\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.916974 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ntwbd-config-rpvvg" event={"ID":"0fae05ff-a7cc-491e-bdba-5339c24f6dd5","Type":"ContainerDied","Data":"372a8015068efcb5913e52f00b111961d1d76d253b2c29f17345652d23891b32"} Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.917020 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="372a8015068efcb5913e52f00b111961d1d76d253b2c29f17345652d23891b32" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.917029 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ntwbd-config-rpvvg" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.919761 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q2dfh" Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.919785 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q2dfh" event={"ID":"5e172a03-3c54-4817-954f-247328c52578","Type":"ContainerDied","Data":"d91efa847702265f8e075895d187494e2bb88ab64482b3fbad035b3855664cf0"} Feb 16 21:15:24 crc kubenswrapper[4805]: I0216 21:15:24.919862 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d91efa847702265f8e075895d187494e2bb88ab64482b3fbad035b3855664cf0" Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.486909 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.616435 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ntwbd-config-rpvvg"] Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.624695 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ntwbd-config-rpvvg"] Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.678668 4805 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podbd3976a1-6498-480d-a4d9-ebec8797c16d"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podbd3976a1-6498-480d-a4d9-ebec8797c16d] : Timed out while waiting for systemd to remove kubepods-besteffort-podbd3976a1_6498_480d_a4d9_ebec8797c16d.slice" Feb 16 21:15:25 crc kubenswrapper[4805]: E0216 21:15:25.678730 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podbd3976a1-6498-480d-a4d9-ebec8797c16d] : unable to destroy 
cgroup paths for cgroup [kubepods besteffort podbd3976a1-6498-480d-a4d9-ebec8797c16d] : Timed out while waiting for systemd to remove kubepods-besteffort-podbd3976a1_6498_480d_a4d9_ebec8797c16d.slice" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf" podUID="bd3976a1-6498-480d-a4d9-ebec8797c16d" Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.684145 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ntwbd-config-qwb4b"] Feb 16 21:15:25 crc kubenswrapper[4805]: E0216 21:15:25.684629 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f409f3b-50d6-47b1-9abb-e90ba2cc03ab" containerName="swift-ring-rebalance" Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.684651 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f409f3b-50d6-47b1-9abb-e90ba2cc03ab" containerName="swift-ring-rebalance" Feb 16 21:15:25 crc kubenswrapper[4805]: E0216 21:15:25.684677 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e172a03-3c54-4817-954f-247328c52578" containerName="mariadb-account-create-update" Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.684686 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e172a03-3c54-4817-954f-247328c52578" containerName="mariadb-account-create-update" Feb 16 21:15:25 crc kubenswrapper[4805]: E0216 21:15:25.684706 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fae05ff-a7cc-491e-bdba-5339c24f6dd5" containerName="ovn-config" Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.684714 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fae05ff-a7cc-491e-bdba-5339c24f6dd5" containerName="ovn-config" Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.684941 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f409f3b-50d6-47b1-9abb-e90ba2cc03ab" containerName="swift-ring-rebalance" Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.684976 4805 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0fae05ff-a7cc-491e-bdba-5339c24f6dd5" containerName="ovn-config"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.684996 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e172a03-3c54-4817-954f-247328c52578" containerName="mariadb-account-create-update"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.685766 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.692163 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.698994 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ntwbd-config-qwb4b"]
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.720288 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run-ovn\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.720388 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwb8k\" (UniqueName: \"kubernetes.io/projected/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-kube-api-access-cwb8k\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.720460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.720510 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-scripts\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.720538 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-additional-scripts\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.720601 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-log-ovn\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.821743 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-log-ovn\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.822139 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run-ovn\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.822205 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwb8k\" (UniqueName: \"kubernetes.io/projected/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-kube-api-access-cwb8k\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.822256 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.822289 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-scripts\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.822311 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-additional-scripts\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.822073 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-log-ovn\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.822845 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run-ovn\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.822909 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.822999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-additional-scripts\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.824837 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-scripts\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.847173 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwb8k\" (UniqueName: \"kubernetes.io/projected/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-kube-api-access-cwb8k\") pod \"ovn-controller-ntwbd-config-qwb4b\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") " pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.928437 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ljkcf"
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.976708 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ljkcf"]
Feb 16 21:15:25 crc kubenswrapper[4805]: I0216 21:15:25.985896 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ljkcf"]
Feb 16 21:15:26 crc kubenswrapper[4805]: I0216 21:15:26.008214 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:26 crc kubenswrapper[4805]: W0216 21:15:26.562279 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf89aaca1_b877_4a7b_bc42_a932eccc0fd0.slice/crio-e690fefb74bca37f96b88c0992ed85b290bd1d5bbe5bae95074ecd07618a5c8b WatchSource:0}: Error finding container e690fefb74bca37f96b88c0992ed85b290bd1d5bbe5bae95074ecd07618a5c8b: Status 404 returned error can't find the container with id e690fefb74bca37f96b88c0992ed85b290bd1d5bbe5bae95074ecd07618a5c8b
Feb 16 21:15:26 crc kubenswrapper[4805]: I0216 21:15:26.563425 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ntwbd-config-qwb4b"]
Feb 16 21:15:26 crc kubenswrapper[4805]: I0216 21:15:26.945512 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0"
Feb 16 21:15:26 crc kubenswrapper[4805]: I0216 21:15:26.947981 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ntwbd-config-qwb4b" event={"ID":"f89aaca1-b877-4a7b-bc42-a932eccc0fd0","Type":"ContainerStarted","Data":"d911435d05729cc45fc9c80b1c823d7807379afaf4a6f2bf520d77cca9764278"}
Feb 16 21:15:26 crc kubenswrapper[4805]: I0216 21:15:26.948031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ntwbd-config-qwb4b" event={"ID":"f89aaca1-b877-4a7b-bc42-a932eccc0fd0","Type":"ContainerStarted","Data":"e690fefb74bca37f96b88c0992ed85b290bd1d5bbe5bae95074ecd07618a5c8b"}
Feb 16 21:15:26 crc kubenswrapper[4805]: I0216 21:15:26.957282 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b51bad1e-99c6-4e2b-ae2b-c7e338ef235e-etc-swift\") pod \"swift-storage-0\" (UID: \"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e\") " pod="openstack/swift-storage-0"
Feb 16 21:15:26 crc kubenswrapper[4805]: I0216 21:15:26.972535 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ntwbd-config-qwb4b" podStartSLOduration=1.97250945 podStartE2EDuration="1.97250945s" podCreationTimestamp="2026-02-16 21:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:26.970353551 +0000 UTC m=+1144.789036846" watchObservedRunningTime="2026-02-16 21:15:26.97250945 +0000 UTC m=+1144.791192735"
Feb 16 21:15:27 crc kubenswrapper[4805]: I0216 21:15:27.205957 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 16 21:15:27 crc kubenswrapper[4805]: I0216 21:15:27.612058 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fae05ff-a7cc-491e-bdba-5339c24f6dd5" path="/var/lib/kubelet/pods/0fae05ff-a7cc-491e-bdba-5339c24f6dd5/volumes"
Feb 16 21:15:27 crc kubenswrapper[4805]: I0216 21:15:27.613129 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd3976a1-6498-480d-a4d9-ebec8797c16d" path="/var/lib/kubelet/pods/bd3976a1-6498-480d-a4d9-ebec8797c16d/volumes"
Feb 16 21:15:27 crc kubenswrapper[4805]: W0216 21:15:27.796173 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb51bad1e_99c6_4e2b_ae2b_c7e338ef235e.slice/crio-30645e368c63d92d0778e865c6b47dc27466189a52a10870a08264c9671bb4c6 WatchSource:0}: Error finding container 30645e368c63d92d0778e865c6b47dc27466189a52a10870a08264c9671bb4c6: Status 404 returned error can't find the container with id 30645e368c63d92d0778e865c6b47dc27466189a52a10870a08264c9671bb4c6
Feb 16 21:15:27 crc kubenswrapper[4805]: I0216 21:15:27.801324 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 21:15:27 crc kubenswrapper[4805]: I0216 21:15:27.811701 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 16 21:15:27 crc kubenswrapper[4805]: I0216 21:15:27.959205 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"30645e368c63d92d0778e865c6b47dc27466189a52a10870a08264c9671bb4c6"}
Feb 16 21:15:27 crc kubenswrapper[4805]: I0216 21:15:27.960949 4805 generic.go:334] "Generic (PLEG): container finished" podID="f89aaca1-b877-4a7b-bc42-a932eccc0fd0" containerID="d911435d05729cc45fc9c80b1c823d7807379afaf4a6f2bf520d77cca9764278" exitCode=0
Feb 16 21:15:27 crc kubenswrapper[4805]: I0216 21:15:27.960989 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ntwbd-config-qwb4b" event={"ID":"f89aaca1-b877-4a7b-bc42-a932eccc0fd0","Type":"ContainerDied","Data":"d911435d05729cc45fc9c80b1c823d7807379afaf4a6f2bf520d77cca9764278"}
Feb 16 21:15:28 crc kubenswrapper[4805]: I0216 21:15:28.357667 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8a48053f-4668-43af-bda4-7af014d6457d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused"
Feb 16 21:15:28 crc kubenswrapper[4805]: I0216 21:15:28.368748 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="95a93760-333e-4689-a64c-c3534a04cec0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused"
Feb 16 21:15:28 crc kubenswrapper[4805]: I0216 21:15:28.383950 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="14fe6c77-adbd-4abe-9aff-7bb72474d47b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Feb 16 21:15:28 crc kubenswrapper[4805]: I0216 21:15:28.630068 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.399846 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596050 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-log-ovn\") pod \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") "
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run\") pod \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") "
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596210 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-scripts\") pod \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") "
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596203 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f89aaca1-b877-4a7b-bc42-a932eccc0fd0" (UID: "f89aaca1-b877-4a7b-bc42-a932eccc0fd0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596272 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-additional-scripts\") pod \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") "
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596310 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run" (OuterVolumeSpecName: "var-run") pod "f89aaca1-b877-4a7b-bc42-a932eccc0fd0" (UID: "f89aaca1-b877-4a7b-bc42-a932eccc0fd0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596411 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwb8k\" (UniqueName: \"kubernetes.io/projected/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-kube-api-access-cwb8k\") pod \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") "
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596444 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run-ovn\") pod \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\" (UID: \"f89aaca1-b877-4a7b-bc42-a932eccc0fd0\") "
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596653 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f89aaca1-b877-4a7b-bc42-a932eccc0fd0" (UID: "f89aaca1-b877-4a7b-bc42-a932eccc0fd0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596960 4805 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596974 4805 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.596982 4805 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.597332 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f89aaca1-b877-4a7b-bc42-a932eccc0fd0" (UID: "f89aaca1-b877-4a7b-bc42-a932eccc0fd0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.598290 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-scripts" (OuterVolumeSpecName: "scripts") pod "f89aaca1-b877-4a7b-bc42-a932eccc0fd0" (UID: "f89aaca1-b877-4a7b-bc42-a932eccc0fd0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.674614 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ntwbd-config-qwb4b"]
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.677356 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ntwbd-config-qwb4b"]
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.698859 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:29 crc kubenswrapper[4805]: I0216 21:15:29.698890 4805 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:30 crc kubenswrapper[4805]: I0216 21:15:30.440036 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-kube-api-access-cwb8k" (OuterVolumeSpecName: "kube-api-access-cwb8k") pod "f89aaca1-b877-4a7b-bc42-a932eccc0fd0" (UID: "f89aaca1-b877-4a7b-bc42-a932eccc0fd0"). InnerVolumeSpecName "kube-api-access-cwb8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:15:30 crc kubenswrapper[4805]: I0216 21:15:30.466438 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e690fefb74bca37f96b88c0992ed85b290bd1d5bbe5bae95074ecd07618a5c8b"
Feb 16 21:15:30 crc kubenswrapper[4805]: I0216 21:15:30.466569 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ntwbd-config-qwb4b"
Feb 16 21:15:30 crc kubenswrapper[4805]: I0216 21:15:30.486902 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:30 crc kubenswrapper[4805]: I0216 21:15:30.490660 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:30 crc kubenswrapper[4805]: I0216 21:15:30.515104 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwb8k\" (UniqueName: \"kubernetes.io/projected/f89aaca1-b877-4a7b-bc42-a932eccc0fd0-kube-api-access-cwb8k\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:31 crc kubenswrapper[4805]: I0216 21:15:31.492852 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"90a1124833daad9a42a20f8c411eae4e0da2784e72979bcb94b5015dfbee2dae"}
Feb 16 21:15:31 crc kubenswrapper[4805]: I0216 21:15:31.493518 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"1c7a4960d1c9cc1cfd1a16c9d60226bcb8ee6d1f19156e8cd38796cc87e07b91"}
Feb 16 21:15:31 crc kubenswrapper[4805]: I0216 21:15:31.493535 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"d60395e1cfbad315d88197fbb24e6b19d4aaf6f9af575f34ee67fbbac37eb1f2"}
Feb 16 21:15:31 crc kubenswrapper[4805]: I0216 21:15:31.493545 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"c9583b553590d361dcd1effb054e15bf100a4e37ea5c364092364972ac2c3a1a"}
Feb 16 21:15:31 crc kubenswrapper[4805]: I0216 21:15:31.496102 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:31 crc kubenswrapper[4805]: I0216 21:15:31.619887 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f89aaca1-b877-4a7b-bc42-a932eccc0fd0" path="/var/lib/kubelet/pods/f89aaca1-b877-4a7b-bc42-a932eccc0fd0/volumes"
Feb 16 21:15:32 crc kubenswrapper[4805]: I0216 21:15:32.505111 4805 generic.go:334] "Generic (PLEG): container finished" podID="b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8" containerID="3706d4fbe325e1fe22de6809fa8f9cfa06446baaedc6cab54ab55e2e12834f45" exitCode=0
Feb 16 21:15:32 crc kubenswrapper[4805]: I0216 21:15:32.505613 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6rt5d" event={"ID":"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8","Type":"ContainerDied","Data":"3706d4fbe325e1fe22de6809fa8f9cfa06446baaedc6cab54ab55e2e12834f45"}
Feb 16 21:15:33 crc kubenswrapper[4805]: I0216 21:15:33.518215 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"599a5edfceb517f6cdd4a03425e54867ed76977f4fea51f37fcc4ffccd03aee6"}
Feb 16 21:15:33 crc kubenswrapper[4805]: I0216 21:15:33.520368 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"2af1604f132785c0e0c9447c70aad28212e52071c30177f6b97acb800e932274"}
Feb 16 21:15:33 crc kubenswrapper[4805]: I0216 21:15:33.520444 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"94b277fee0c26fc5d682c78fd83c8759a608a9b33f39885155a0620ee3da0cbe"}
Feb 16 21:15:33 crc kubenswrapper[4805]: I0216 21:15:33.520537 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"2379664548ad8a0f3f3c8c1da771738e31e99cf1d5a8c7f85ae348d666e14d55"}
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.056163 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.060036 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="prometheus" containerID="cri-o://63b5c23bfc15b4e8f364513e1eae33a8da3477a30592740fa648b615d34934f7" gracePeriod=600
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.066287 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="config-reloader" containerID="cri-o://677139dfb2585d763bd4a1cfafc5c9f8702da0d814e269e88e41e2f748be9181" gracePeriod=600
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.066440 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="thanos-sidecar" containerID="cri-o://7dacd078f3b71b47858926f5f4d988803ee3f912be9ea9852307a88e80d73e22" gracePeriod=600
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.114276 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6rt5d"
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.244103 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-config-data\") pod \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") "
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.244384 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-db-sync-config-data\") pod \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") "
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.244475 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-combined-ca-bundle\") pod \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") "
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.244543 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-954fm\" (UniqueName: \"kubernetes.io/projected/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-kube-api-access-954fm\") pod \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\" (UID: \"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8\") "
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.252208 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8" (UID: "b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.252525 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-kube-api-access-954fm" (OuterVolumeSpecName: "kube-api-access-954fm") pod "b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8" (UID: "b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8"). InnerVolumeSpecName "kube-api-access-954fm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.294035 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8" (UID: "b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.346538 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.346580 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-954fm\" (UniqueName: \"kubernetes.io/projected/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-kube-api-access-954fm\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.346596 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.404740 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-config-data" (OuterVolumeSpecName: "config-data") pod "b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8" (UID: "b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.448827 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.530313 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6rt5d" event={"ID":"b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8","Type":"ContainerDied","Data":"16318824fe45e7ba0016fb93c6e415e897e9b3a9228b6a551c30c6c8de046c59"}
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.530872 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16318824fe45e7ba0016fb93c6e415e897e9b3a9228b6a551c30c6c8de046c59"
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.530432 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6rt5d"
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.533819 4805 generic.go:334] "Generic (PLEG): container finished" podID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerID="7dacd078f3b71b47858926f5f4d988803ee3f912be9ea9852307a88e80d73e22" exitCode=0
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.533859 4805 generic.go:334] "Generic (PLEG): container finished" podID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerID="677139dfb2585d763bd4a1cfafc5c9f8702da0d814e269e88e41e2f748be9181" exitCode=0
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.533872 4805 generic.go:334] "Generic (PLEG): container finished" podID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerID="63b5c23bfc15b4e8f364513e1eae33a8da3477a30592740fa648b615d34934f7" exitCode=0
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.533898 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0","Type":"ContainerDied","Data":"7dacd078f3b71b47858926f5f4d988803ee3f912be9ea9852307a88e80d73e22"}
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.533936 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0","Type":"ContainerDied","Data":"677139dfb2585d763bd4a1cfafc5c9f8702da0d814e269e88e41e2f748be9181"}
Feb 16 21:15:34 crc kubenswrapper[4805]: I0216 21:15:34.533951 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0","Type":"ContainerDied","Data":"63b5c23bfc15b4e8f364513e1eae33a8da3477a30592740fa648b615d34934f7"}
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.013428 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-thlzl"]
Feb 16 21:15:35 crc kubenswrapper[4805]: E0216 21:15:35.014217 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89aaca1-b877-4a7b-bc42-a932eccc0fd0" containerName="ovn-config"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.014240 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89aaca1-b877-4a7b-bc42-a932eccc0fd0" containerName="ovn-config"
Feb 16 21:15:35 crc kubenswrapper[4805]: E0216 21:15:35.014306 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8" containerName="glance-db-sync"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.014315 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8" containerName="glance-db-sync"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.014659 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8" containerName="glance-db-sync"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.014689 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89aaca1-b877-4a7b-bc42-a932eccc0fd0" containerName="ovn-config"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.016811 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.041295 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-thlzl"]
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.175197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-config\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.175256 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.175291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-dns-svc\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.175308 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.175366 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk266\" (UniqueName: \"kubernetes.io/projected/3a25d8f6-5aff-4095-9624-55a96f9483af-kube-api-access-qk266\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.276971 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-config\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.277021 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.277055 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-dns-svc\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.277071 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.277128 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk266\" (UniqueName: \"kubernetes.io/projected/3a25d8f6-5aff-4095-9624-55a96f9483af-kube-api-access-qk266\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.278274 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-config\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.278270 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.278417 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.278998 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-dns-svc\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.296385 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk266\" (UniqueName: \"kubernetes.io/projected/3a25d8f6-5aff-4095-9624-55a96f9483af-kube-api-access-qk266\") pod \"dnsmasq-dns-74dc88fc-thlzl\" (UID:
\"3a25d8f6-5aff-4095-9624-55a96f9483af\") " pod="openstack/dnsmasq-dns-74dc88fc-thlzl" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.312874 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.347352 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-thlzl" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.483450 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-1\") pod \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.483557 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-tls-assets\") pod \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.483574 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-0\") pod \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.483618 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-2\") pod \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\" (UID: 
\"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.483638 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config-out\") pod \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.483690 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-web-config\") pod \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.483759 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn5rh\" (UniqueName: \"kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-kube-api-access-nn5rh\") pod \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.483883 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3\") pod \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.483906 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config\") pod \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.483927 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-thanos-prometheus-http-client-file\") pod \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\" (UID: \"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0\") " Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.484520 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" (UID: "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.484541 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" (UID: "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.484697 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" (UID: "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.490456 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config-out" (OuterVolumeSpecName: "config-out") pod "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" (UID: "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.492571 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config" (OuterVolumeSpecName: "config") pod "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" (UID: "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.492858 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-kube-api-access-nn5rh" (OuterVolumeSpecName: "kube-api-access-nn5rh") pod "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" (UID: "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0"). InnerVolumeSpecName "kube-api-access-nn5rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.496812 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" (UID: "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.496931 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" (UID: "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.529786 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" (UID: "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0"). InnerVolumeSpecName "pvc-e77434f7-c14a-4513-b88b-caaea89911c3". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.544085 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-web-config" (OuterVolumeSpecName: "web-config") pod "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" (UID: "e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.553039 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0","Type":"ContainerDied","Data":"d8c4d866361da01f71ce6a2c664fd126f578bc2249601c1655e57b5782762dac"} Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.553085 4805 scope.go:117] "RemoveContainer" containerID="7dacd078f3b71b47858926f5f4d988803ee3f912be9ea9852307a88e80d73e22" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.553199 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.577899 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"dd83060b040c9624b52ea682009e17ebe7fc824830c86246e04dbfb318ba1a7d"} Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.577956 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"3e8714259f90bf2e970d1dcbe8c9273aba219185f6aa2095ae9208906837eebc"} Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.588111 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.588158 4805 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.588170 4805 reconciler_common.go:293] "Volume 
detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.588181 4805 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.588192 4805 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.588201 4805 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.588212 4805 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-config-out\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.588223 4805 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-web-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.588233 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn5rh\" (UniqueName: \"kubernetes.io/projected/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0-kube-api-access-nn5rh\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.588286 4805 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"pvc-e77434f7-c14a-4513-b88b-caaea89911c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3\") on node \"crc\" " Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.663559 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.664345 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e77434f7-c14a-4513-b88b-caaea89911c3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3") on node "crc" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.692064 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-e77434f7-c14a-4513-b88b-caaea89911c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.711626 4805 scope.go:117] "RemoveContainer" containerID="677139dfb2585d763bd4a1cfafc5c9f8702da0d814e269e88e41e2f748be9181" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.749681 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.758303 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.777756 4805 scope.go:117] "RemoveContainer" containerID="63b5c23bfc15b4e8f364513e1eae33a8da3477a30592740fa648b615d34934f7" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.814657 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:15:35 crc kubenswrapper[4805]: E0216 21:15:35.815293 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="thanos-sidecar" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.815314 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="thanos-sidecar" Feb 16 21:15:35 crc kubenswrapper[4805]: E0216 21:15:35.815346 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="prometheus" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.815352 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="prometheus" Feb 16 21:15:35 crc kubenswrapper[4805]: E0216 21:15:35.815386 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="config-reloader" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.815397 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="config-reloader" Feb 16 21:15:35 crc kubenswrapper[4805]: E0216 21:15:35.815410 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="init-config-reloader" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.815417 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="init-config-reloader" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.815643 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="thanos-sidecar" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.815667 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="prometheus" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.815681 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" containerName="config-reloader" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.817939 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.818272 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.820138 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.834700 4805 scope.go:117] "RemoveContainer" containerID="ad495408810070426871cbc7524fae8858fbd209987c836aa20bd539abee8f91" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.835283 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.835507 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.835663 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.836448 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.836671 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-tx9qq" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.836797 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.837452 4805 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.838161 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.898955 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.899027 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e77434f7-c14a-4513-b88b-caaea89911c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.899049 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c6769912-8cfc-48b8-b709-5398ca380e38-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.899097 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 
21:15:35.899120 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.899164 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.899185 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c6769912-8cfc-48b8-b709-5398ca380e38-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.899204 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.899226 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.899251 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnwgd\" (UniqueName: \"kubernetes.io/projected/c6769912-8cfc-48b8-b709-5398ca380e38-kube-api-access-cnwgd\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.899284 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c6769912-8cfc-48b8-b709-5398ca380e38-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.899311 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c6769912-8cfc-48b8-b709-5398ca380e38-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: I0216 21:15:35.899334 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c6769912-8cfc-48b8-b709-5398ca380e38-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:35 crc kubenswrapper[4805]: 
I0216 21:15:35.951435 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-thlzl"] Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002054 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c6769912-8cfc-48b8-b709-5398ca380e38-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002136 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c6769912-8cfc-48b8-b709-5398ca380e38-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002282 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c6769912-8cfc-48b8-b709-5398ca380e38-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002346 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e77434f7-c14a-4513-b88b-caaea89911c3\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002423 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c6769912-8cfc-48b8-b709-5398ca380e38-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002496 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002544 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002565 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c6769912-8cfc-48b8-b709-5398ca380e38-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002588 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002612 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002639 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnwgd\" (UniqueName: \"kubernetes.io/projected/c6769912-8cfc-48b8-b709-5398ca380e38-kube-api-access-cnwgd\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.002949 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c6769912-8cfc-48b8-b709-5398ca380e38-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.003948 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c6769912-8cfc-48b8-b709-5398ca380e38-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.006872 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c6769912-8cfc-48b8-b709-5398ca380e38-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.009493 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.009613 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e77434f7-c14a-4513-b88b-caaea89911c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/820435a9e07a10f19b33f7de556745380338e31e769cf4b46fae642a65ea8517/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.013577 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.014770 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.016549 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.016944 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.020429 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.022320 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c6769912-8cfc-48b8-b709-5398ca380e38-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.029339 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c6769912-8cfc-48b8-b709-5398ca380e38-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.029876 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnwgd\" (UniqueName: \"kubernetes.io/projected/c6769912-8cfc-48b8-b709-5398ca380e38-kube-api-access-cnwgd\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.034430 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c6769912-8cfc-48b8-b709-5398ca380e38-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.102501 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e77434f7-c14a-4513-b88b-caaea89911c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e77434f7-c14a-4513-b88b-caaea89911c3\") pod \"prometheus-metric-storage-0\" (UID: \"c6769912-8cfc-48b8-b709-5398ca380e38\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.370026 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.617075 4805 generic.go:334] "Generic (PLEG): container finished" podID="3a25d8f6-5aff-4095-9624-55a96f9483af" containerID="d30e3f2474316b7256fdc1c01c71b706777949ca6f15f8191e642365ea24b44e" exitCode=0
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.617900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-thlzl" event={"ID":"3a25d8f6-5aff-4095-9624-55a96f9483af","Type":"ContainerDied","Data":"d30e3f2474316b7256fdc1c01c71b706777949ca6f15f8191e642365ea24b44e"}
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.617961 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-thlzl" event={"ID":"3a25d8f6-5aff-4095-9624-55a96f9483af","Type":"ContainerStarted","Data":"8c2127d92266082fe1af069d1413f8fffc76077779d50d24cdfddd4762ece006"}
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.754939 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"2bf2b606be26e5f88c456fc925e959e0a297b831b898a7df593e8edbe562f78c"}
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.755025 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"d4cee59256f9e4e3dc2fe639c7a964c2bc4ad520b04371bcaf6212141fd595e4"}
Feb 16 21:15:36 crc kubenswrapper[4805]: I0216 21:15:36.755051 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"93ea19ae9ccad6c166795ddff78d15811fd54d174eed89fbddd2e0b7320c40d3"}
Feb 16 21:15:37 crc kubenswrapper[4805]: I0216 21:15:37.013707 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 21:15:37 crc kubenswrapper[4805]: I0216 21:15:37.620760 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0" path="/var/lib/kubelet/pods/e7b6f56c-de3e-4a7b-9e89-6f2c76fa7bf0/volumes"
Feb 16 21:15:37 crc kubenswrapper[4805]: I0216 21:15:37.776766 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6769912-8cfc-48b8-b709-5398ca380e38","Type":"ContainerStarted","Data":"fcf928695f160a9663be5dcfb4d94814c1e2e072ee84406cea7be8dbfad038fb"}
Feb 16 21:15:37 crc kubenswrapper[4805]: I0216 21:15:37.778799 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-thlzl" event={"ID":"3a25d8f6-5aff-4095-9624-55a96f9483af","Type":"ContainerStarted","Data":"71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622"}
Feb 16 21:15:37 crc kubenswrapper[4805]: I0216 21:15:37.779036 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:37 crc kubenswrapper[4805]: I0216 21:15:37.783910 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"b49d6c163bbe707edfdd4c9a173356abc361bc0b88348f77006cd74449ff79dc"}
Feb 16 21:15:37 crc kubenswrapper[4805]: I0216 21:15:37.783938 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b51bad1e-99c6-4e2b-ae2b-c7e338ef235e","Type":"ContainerStarted","Data":"57bba5a825eae6646724da963a8e8d1ae2a0fc40a2e6259051969ef8cc568c52"}
Feb 16 21:15:37 crc kubenswrapper[4805]: I0216 21:15:37.840787 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-thlzl" podStartSLOduration=3.840767046 podStartE2EDuration="3.840767046s" podCreationTimestamp="2026-02-16 21:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:37.828579629 +0000 UTC m=+1155.647262924" watchObservedRunningTime="2026-02-16 21:15:37.840767046 +0000 UTC m=+1155.659450341"
Feb 16 21:15:37 crc kubenswrapper[4805]: I0216 21:15:37.902329 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.850051578 podStartE2EDuration="43.902308145s" podCreationTimestamp="2026-02-16 21:14:54 +0000 UTC" firstStartedPulling="2026-02-16 21:15:27.801003627 +0000 UTC m=+1145.619686942" lastFinishedPulling="2026-02-16 21:15:34.853260214 +0000 UTC m=+1152.671943509" observedRunningTime="2026-02-16 21:15:37.890089587 +0000 UTC m=+1155.708772892" watchObservedRunningTime="2026-02-16 21:15:37.902308145 +0000 UTC m=+1155.720991440"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.220541 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-thlzl"]
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.250551 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-87s5g"]
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.252215 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.254612 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.275506 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-87s5g"]
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.358154 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.362006 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.362118 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.362237 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-config\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.362284 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.362347 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ztlv\" (UniqueName: \"kubernetes.io/projected/b3c33787-c8a9-46fa-b932-cedfe284d377-kube-api-access-6ztlv\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.362410 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.368528 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.385961 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.464657 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-config\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.464910 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.465066 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ztlv\" (UniqueName: \"kubernetes.io/projected/b3c33787-c8a9-46fa-b932-cedfe284d377-kube-api-access-6ztlv\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.465428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.465531 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.465703 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.465932 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-config\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.466119 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.466578 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.467040 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.467202 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.493675 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ztlv\" (UniqueName: \"kubernetes.io/projected/b3c33787-c8a9-46fa-b932-cedfe284d377-kube-api-access-6ztlv\") pod \"dnsmasq-dns-5f59b8f679-87s5g\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:38 crc kubenswrapper[4805]: I0216 21:15:38.576377 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g"
Feb 16 21:15:39 crc kubenswrapper[4805]: I0216 21:15:39.209707 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-87s5g"]
Feb 16 21:15:39 crc kubenswrapper[4805]: I0216 21:15:39.831085 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-thlzl" podUID="3a25d8f6-5aff-4095-9624-55a96f9483af" containerName="dnsmasq-dns" containerID="cri-o://71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622" gracePeriod=10
Feb 16 21:15:39 crc kubenswrapper[4805]: I0216 21:15:39.833134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" event={"ID":"b3c33787-c8a9-46fa-b932-cedfe284d377","Type":"ContainerStarted","Data":"f1709a892c6be6cb72c430684a69a3091687e107139b4f49a46c3fb4744ca95e"}
Feb 16 21:15:39 crc kubenswrapper[4805]: I0216 21:15:39.833176 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" event={"ID":"b3c33787-c8a9-46fa-b932-cedfe284d377","Type":"ContainerStarted","Data":"90188d774ba2f596a6f8365f7e4c2804532fd216d144f7983a9f0cfe848706f2"}
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.484157 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.641226 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-dns-svc\") pod \"3a25d8f6-5aff-4095-9624-55a96f9483af\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") "
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.641274 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-config\") pod \"3a25d8f6-5aff-4095-9624-55a96f9483af\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") "
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.641324 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk266\" (UniqueName: \"kubernetes.io/projected/3a25d8f6-5aff-4095-9624-55a96f9483af-kube-api-access-qk266\") pod \"3a25d8f6-5aff-4095-9624-55a96f9483af\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") "
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.641513 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-sb\") pod \"3a25d8f6-5aff-4095-9624-55a96f9483af\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") "
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.641595 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-nb\") pod \"3a25d8f6-5aff-4095-9624-55a96f9483af\" (UID: \"3a25d8f6-5aff-4095-9624-55a96f9483af\") "
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.656639 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a25d8f6-5aff-4095-9624-55a96f9483af-kube-api-access-qk266" (OuterVolumeSpecName: "kube-api-access-qk266") pod "3a25d8f6-5aff-4095-9624-55a96f9483af" (UID: "3a25d8f6-5aff-4095-9624-55a96f9483af"). InnerVolumeSpecName "kube-api-access-qk266". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.737650 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a25d8f6-5aff-4095-9624-55a96f9483af" (UID: "3a25d8f6-5aff-4095-9624-55a96f9483af"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.747532 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk266\" (UniqueName: \"kubernetes.io/projected/3a25d8f6-5aff-4095-9624-55a96f9483af-kube-api-access-qk266\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.747579 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.778137 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-config" (OuterVolumeSpecName: "config") pod "3a25d8f6-5aff-4095-9624-55a96f9483af" (UID: "3a25d8f6-5aff-4095-9624-55a96f9483af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.778213 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a25d8f6-5aff-4095-9624-55a96f9483af" (UID: "3a25d8f6-5aff-4095-9624-55a96f9483af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.778468 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a25d8f6-5aff-4095-9624-55a96f9483af" (UID: "3a25d8f6-5aff-4095-9624-55a96f9483af"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.800612 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-49c2p"]
Feb 16 21:15:40 crc kubenswrapper[4805]: E0216 21:15:40.801054 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a25d8f6-5aff-4095-9624-55a96f9483af" containerName="dnsmasq-dns"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.801071 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a25d8f6-5aff-4095-9624-55a96f9483af" containerName="dnsmasq-dns"
Feb 16 21:15:40 crc kubenswrapper[4805]: E0216 21:15:40.801105 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a25d8f6-5aff-4095-9624-55a96f9483af" containerName="init"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.801111 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a25d8f6-5aff-4095-9624-55a96f9483af" containerName="init"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.801299 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a25d8f6-5aff-4095-9624-55a96f9483af" containerName="dnsmasq-dns"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.808497 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-49c2p"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.825507 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-49c2p"]
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.860704 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-operator-scripts\") pod \"heat-db-create-49c2p\" (UID: \"7a86d7c1-b6fa-410b-abf0-3f809f09ce66\") " pod="openstack/heat-db-create-49c2p"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.861043 4805 generic.go:334] "Generic (PLEG): container finished" podID="3a25d8f6-5aff-4095-9624-55a96f9483af" containerID="71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622" exitCode=0
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.861083 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flhx2\" (UniqueName: \"kubernetes.io/projected/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-kube-api-access-flhx2\") pod \"heat-db-create-49c2p\" (UID: \"7a86d7c1-b6fa-410b-abf0-3f809f09ce66\") " pod="openstack/heat-db-create-49c2p"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.861117 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-thlzl" event={"ID":"3a25d8f6-5aff-4095-9624-55a96f9483af","Type":"ContainerDied","Data":"71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622"}
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.861142 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-thlzl" event={"ID":"3a25d8f6-5aff-4095-9624-55a96f9483af","Type":"ContainerDied","Data":"8c2127d92266082fe1af069d1413f8fffc76077779d50d24cdfddd4762ece006"}
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.861158 4805 scope.go:117] "RemoveContainer" containerID="71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.861275 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-thlzl"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.861526 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.861841 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.861859 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a25d8f6-5aff-4095-9624-55a96f9483af-config\") on node \"crc\" DevicePath \"\""
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.872313 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6769912-8cfc-48b8-b709-5398ca380e38","Type":"ContainerStarted","Data":"84b5052f6b1d46e1a9072fea4950033162181a3fc15e5f9510c0132ff442ebb9"}
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.891512 4805 generic.go:334] "Generic (PLEG): container finished" podID="b3c33787-c8a9-46fa-b932-cedfe284d377" containerID="f1709a892c6be6cb72c430684a69a3091687e107139b4f49a46c3fb4744ca95e" exitCode=0
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.891559 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" event={"ID":"b3c33787-c8a9-46fa-b932-cedfe284d377","Type":"ContainerDied","Data":"f1709a892c6be6cb72c430684a69a3091687e107139b4f49a46c3fb4744ca95e"}
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.899695 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-65ee-account-create-update-ktsbv"]
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.903256 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-65ee-account-create-update-ktsbv"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.905866 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.918037 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-65ee-account-create-update-ktsbv"]
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.924083 4805 scope.go:117] "RemoveContainer" containerID="d30e3f2474316b7256fdc1c01c71b706777949ca6f15f8191e642365ea24b44e"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.946333 4805 scope.go:117] "RemoveContainer" containerID="71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622"
Feb 16 21:15:40 crc kubenswrapper[4805]: E0216 21:15:40.946958 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622\": container with ID starting with 71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622 not found: ID does not exist" containerID="71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.946986 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622"} err="failed to get container status \"71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622\": rpc error: code = NotFound desc = could not find container \"71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622\": container with ID starting with 71c33e3d4fe4e1800120d92c587caf2c5ea32925cbd078e79db1aff8f2a0f622 not found: ID does not exist"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.947007 4805 scope.go:117] "RemoveContainer" containerID="d30e3f2474316b7256fdc1c01c71b706777949ca6f15f8191e642365ea24b44e"
Feb 16 21:15:40 crc kubenswrapper[4805]: E0216 21:15:40.947559 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d30e3f2474316b7256fdc1c01c71b706777949ca6f15f8191e642365ea24b44e\": container with ID starting with d30e3f2474316b7256fdc1c01c71b706777949ca6f15f8191e642365ea24b44e not found: ID does not exist" containerID="d30e3f2474316b7256fdc1c01c71b706777949ca6f15f8191e642365ea24b44e"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.947594 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d30e3f2474316b7256fdc1c01c71b706777949ca6f15f8191e642365ea24b44e"} err="failed to get container status \"d30e3f2474316b7256fdc1c01c71b706777949ca6f15f8191e642365ea24b44e\": rpc error: code = NotFound desc = could not find container \"d30e3f2474316b7256fdc1c01c71b706777949ca6f15f8191e642365ea24b44e\": container with ID starting with d30e3f2474316b7256fdc1c01c71b706777949ca6f15f8191e642365ea24b44e not found: ID does not exist"
Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.964683 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t724c\" (UniqueName: \"kubernetes.io/projected/49902100-6d13-4aa5-9e40-fd76424f5dd4-kube-api-access-t724c\") pod \"heat-65ee-account-create-update-ktsbv\" (UID: \"49902100-6d13-4aa5-9e40-fd76424f5dd4\") "
pod="openstack/heat-65ee-account-create-update-ktsbv" Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.964796 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flhx2\" (UniqueName: \"kubernetes.io/projected/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-kube-api-access-flhx2\") pod \"heat-db-create-49c2p\" (UID: \"7a86d7c1-b6fa-410b-abf0-3f809f09ce66\") " pod="openstack/heat-db-create-49c2p" Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.965031 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49902100-6d13-4aa5-9e40-fd76424f5dd4-operator-scripts\") pod \"heat-65ee-account-create-update-ktsbv\" (UID: \"49902100-6d13-4aa5-9e40-fd76424f5dd4\") " pod="openstack/heat-65ee-account-create-update-ktsbv" Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.965065 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-operator-scripts\") pod \"heat-db-create-49c2p\" (UID: \"7a86d7c1-b6fa-410b-abf0-3f809f09ce66\") " pod="openstack/heat-db-create-49c2p" Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.970144 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-operator-scripts\") pod \"heat-db-create-49c2p\" (UID: \"7a86d7c1-b6fa-410b-abf0-3f809f09ce66\") " pod="openstack/heat-db-create-49c2p" Feb 16 21:15:40 crc kubenswrapper[4805]: I0216 21:15:40.997688 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flhx2\" (UniqueName: \"kubernetes.io/projected/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-kube-api-access-flhx2\") pod \"heat-db-create-49c2p\" (UID: \"7a86d7c1-b6fa-410b-abf0-3f809f09ce66\") " pod="openstack/heat-db-create-49c2p" Feb 
16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.013895 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-thlzl"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.054890 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-thlzl"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.069986 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t724c\" (UniqueName: \"kubernetes.io/projected/49902100-6d13-4aa5-9e40-fd76424f5dd4-kube-api-access-t724c\") pod \"heat-65ee-account-create-update-ktsbv\" (UID: \"49902100-6d13-4aa5-9e40-fd76424f5dd4\") " pod="openstack/heat-65ee-account-create-update-ktsbv" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.070387 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49902100-6d13-4aa5-9e40-fd76424f5dd4-operator-scripts\") pod \"heat-65ee-account-create-update-ktsbv\" (UID: \"49902100-6d13-4aa5-9e40-fd76424f5dd4\") " pod="openstack/heat-65ee-account-create-update-ktsbv" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.071268 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49902100-6d13-4aa5-9e40-fd76424f5dd4-operator-scripts\") pod \"heat-65ee-account-create-update-ktsbv\" (UID: \"49902100-6d13-4aa5-9e40-fd76424f5dd4\") " pod="openstack/heat-65ee-account-create-update-ktsbv" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.091126 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-n46mw"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.092520 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-n46mw" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.093633 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t724c\" (UniqueName: \"kubernetes.io/projected/49902100-6d13-4aa5-9e40-fd76424f5dd4-kube-api-access-t724c\") pod \"heat-65ee-account-create-update-ktsbv\" (UID: \"49902100-6d13-4aa5-9e40-fd76424f5dd4\") " pod="openstack/heat-65ee-account-create-update-ktsbv" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.116781 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d98e-account-create-update-t8mk9"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.118278 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d98e-account-create-update-t8mk9" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.121149 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.128490 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-49c2p" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.198393 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-n46mw"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.238136 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d98e-account-create-update-t8mk9"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.241335 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-65ee-account-create-update-ktsbv" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.280705 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3e5581-5041-48ca-be14-1220df2a86d8-operator-scripts\") pod \"cinder-db-create-n46mw\" (UID: \"0c3e5581-5041-48ca-be14-1220df2a86d8\") " pod="openstack/cinder-db-create-n46mw" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.280816 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl287\" (UniqueName: \"kubernetes.io/projected/0c3e5581-5041-48ca-be14-1220df2a86d8-kube-api-access-pl287\") pod \"cinder-db-create-n46mw\" (UID: \"0c3e5581-5041-48ca-be14-1220df2a86d8\") " pod="openstack/cinder-db-create-n46mw" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.280851 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtthz\" (UniqueName: \"kubernetes.io/projected/449f249d-010e-41e2-8314-9dd16925c7ae-kube-api-access-rtthz\") pod \"cinder-d98e-account-create-update-t8mk9\" (UID: \"449f249d-010e-41e2-8314-9dd16925c7ae\") " pod="openstack/cinder-d98e-account-create-update-t8mk9" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.280885 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/449f249d-010e-41e2-8314-9dd16925c7ae-operator-scripts\") pod \"cinder-d98e-account-create-update-t8mk9\" (UID: \"449f249d-010e-41e2-8314-9dd16925c7ae\") " pod="openstack/cinder-d98e-account-create-update-t8mk9" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.367455 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-shtt5"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.369145 4805 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.373907 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.374168 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.374506 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.375296 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xc6z5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.383796 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-shtt5"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.385398 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3e5581-5041-48ca-be14-1220df2a86d8-operator-scripts\") pod \"cinder-db-create-n46mw\" (UID: \"0c3e5581-5041-48ca-be14-1220df2a86d8\") " pod="openstack/cinder-db-create-n46mw" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.385471 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl287\" (UniqueName: \"kubernetes.io/projected/0c3e5581-5041-48ca-be14-1220df2a86d8-kube-api-access-pl287\") pod \"cinder-db-create-n46mw\" (UID: \"0c3e5581-5041-48ca-be14-1220df2a86d8\") " pod="openstack/cinder-db-create-n46mw" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.385506 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtthz\" (UniqueName: \"kubernetes.io/projected/449f249d-010e-41e2-8314-9dd16925c7ae-kube-api-access-rtthz\") pod 
\"cinder-d98e-account-create-update-t8mk9\" (UID: \"449f249d-010e-41e2-8314-9dd16925c7ae\") " pod="openstack/cinder-d98e-account-create-update-t8mk9" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.385540 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/449f249d-010e-41e2-8314-9dd16925c7ae-operator-scripts\") pod \"cinder-d98e-account-create-update-t8mk9\" (UID: \"449f249d-010e-41e2-8314-9dd16925c7ae\") " pod="openstack/cinder-d98e-account-create-update-t8mk9" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.386740 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/449f249d-010e-41e2-8314-9dd16925c7ae-operator-scripts\") pod \"cinder-d98e-account-create-update-t8mk9\" (UID: \"449f249d-010e-41e2-8314-9dd16925c7ae\") " pod="openstack/cinder-d98e-account-create-update-t8mk9" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.387374 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3e5581-5041-48ca-be14-1220df2a86d8-operator-scripts\") pod \"cinder-db-create-n46mw\" (UID: \"0c3e5581-5041-48ca-be14-1220df2a86d8\") " pod="openstack/cinder-db-create-n46mw" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.401187 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8zf24"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.406674 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8zf24" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.430627 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0e19-account-create-update-r4lvm"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.432216 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0e19-account-create-update-r4lvm" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.434920 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.444619 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0e19-account-create-update-r4lvm"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.453659 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtthz\" (UniqueName: \"kubernetes.io/projected/449f249d-010e-41e2-8314-9dd16925c7ae-kube-api-access-rtthz\") pod \"cinder-d98e-account-create-update-t8mk9\" (UID: \"449f249d-010e-41e2-8314-9dd16925c7ae\") " pod="openstack/cinder-d98e-account-create-update-t8mk9" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.464984 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl287\" (UniqueName: \"kubernetes.io/projected/0c3e5581-5041-48ca-be14-1220df2a86d8-kube-api-access-pl287\") pod \"cinder-db-create-n46mw\" (UID: \"0c3e5581-5041-48ca-be14-1220df2a86d8\") " pod="openstack/cinder-db-create-n46mw" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.473152 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8zf24"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.491115 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-config-data\") pod \"keystone-db-sync-shtt5\" (UID: \"4ea5126d-5794-4444-968f-696bee9afc30\") " pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.491252 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-combined-ca-bundle\") pod \"keystone-db-sync-shtt5\" (UID: \"4ea5126d-5794-4444-968f-696bee9afc30\") " pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.491289 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxbzx\" (UniqueName: \"kubernetes.io/projected/4ea5126d-5794-4444-968f-696bee9afc30-kube-api-access-pxbzx\") pod \"keystone-db-sync-shtt5\" (UID: \"4ea5126d-5794-4444-968f-696bee9afc30\") " pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.511966 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-s9lw5"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.526224 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s9lw5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.546137 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-05bd-account-create-update-n2bst"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.547534 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-05bd-account-create-update-n2bst" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.552460 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.591310 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-n46mw" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.592858 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944f457d-a34a-4f92-8172-a23175048fad-operator-scripts\") pod \"barbican-0e19-account-create-update-r4lvm\" (UID: \"944f457d-a34a-4f92-8172-a23175048fad\") " pod="openstack/barbican-0e19-account-create-update-r4lvm" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.592887 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxbzx\" (UniqueName: \"kubernetes.io/projected/4ea5126d-5794-4444-968f-696bee9afc30-kube-api-access-pxbzx\") pod \"keystone-db-sync-shtt5\" (UID: \"4ea5126d-5794-4444-968f-696bee9afc30\") " pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.592931 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nb74\" (UniqueName: \"kubernetes.io/projected/944f457d-a34a-4f92-8172-a23175048fad-kube-api-access-5nb74\") pod \"barbican-0e19-account-create-update-r4lvm\" (UID: \"944f457d-a34a-4f92-8172-a23175048fad\") " pod="openstack/barbican-0e19-account-create-update-r4lvm" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.592957 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rmw5\" (UniqueName: \"kubernetes.io/projected/e8d5b185-9950-4ff2-b56c-278a766f3c02-kube-api-access-6rmw5\") pod \"barbican-db-create-8zf24\" (UID: \"e8d5b185-9950-4ff2-b56c-278a766f3c02\") " pod="openstack/barbican-db-create-8zf24" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.593047 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-config-data\") pod \"keystone-db-sync-shtt5\" (UID: \"4ea5126d-5794-4444-968f-696bee9afc30\") " pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.593156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-combined-ca-bundle\") pod \"keystone-db-sync-shtt5\" (UID: \"4ea5126d-5794-4444-968f-696bee9afc30\") " pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.593177 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8d5b185-9950-4ff2-b56c-278a766f3c02-operator-scripts\") pod \"barbican-db-create-8zf24\" (UID: \"e8d5b185-9950-4ff2-b56c-278a766f3c02\") " pod="openstack/barbican-db-create-8zf24" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.601781 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-config-data\") pod \"keystone-db-sync-shtt5\" (UID: \"4ea5126d-5794-4444-968f-696bee9afc30\") " pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.616611 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-combined-ca-bundle\") pod \"keystone-db-sync-shtt5\" (UID: \"4ea5126d-5794-4444-968f-696bee9afc30\") " pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.617481 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxbzx\" (UniqueName: \"kubernetes.io/projected/4ea5126d-5794-4444-968f-696bee9afc30-kube-api-access-pxbzx\") pod 
\"keystone-db-sync-shtt5\" (UID: \"4ea5126d-5794-4444-968f-696bee9afc30\") " pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.647479 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d98e-account-create-update-t8mk9" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.654470 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a25d8f6-5aff-4095-9624-55a96f9483af" path="/var/lib/kubelet/pods/3a25d8f6-5aff-4095-9624-55a96f9483af/volumes" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.655538 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s9lw5"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.662846 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-05bd-account-create-update-n2bst"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.694535 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8d5b185-9950-4ff2-b56c-278a766f3c02-operator-scripts\") pod \"barbican-db-create-8zf24\" (UID: \"e8d5b185-9950-4ff2-b56c-278a766f3c02\") " pod="openstack/barbican-db-create-8zf24" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.694589 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944f457d-a34a-4f92-8172-a23175048fad-operator-scripts\") pod \"barbican-0e19-account-create-update-r4lvm\" (UID: \"944f457d-a34a-4f92-8172-a23175048fad\") " pod="openstack/barbican-0e19-account-create-update-r4lvm" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.694622 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9wjs\" (UniqueName: \"kubernetes.io/projected/70e49f50-c6fb-46b5-89ae-aa379290cc57-kube-api-access-h9wjs\") pod 
\"neutron-05bd-account-create-update-n2bst\" (UID: \"70e49f50-c6fb-46b5-89ae-aa379290cc57\") " pod="openstack/neutron-05bd-account-create-update-n2bst" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.694650 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nb74\" (UniqueName: \"kubernetes.io/projected/944f457d-a34a-4f92-8172-a23175048fad-kube-api-access-5nb74\") pod \"barbican-0e19-account-create-update-r4lvm\" (UID: \"944f457d-a34a-4f92-8172-a23175048fad\") " pod="openstack/barbican-0e19-account-create-update-r4lvm" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.694688 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rmw5\" (UniqueName: \"kubernetes.io/projected/e8d5b185-9950-4ff2-b56c-278a766f3c02-kube-api-access-6rmw5\") pod \"barbican-db-create-8zf24\" (UID: \"e8d5b185-9950-4ff2-b56c-278a766f3c02\") " pod="openstack/barbican-db-create-8zf24" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.694765 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9796afa-7a50-49c7-b85b-8e3075f92596-operator-scripts\") pod \"neutron-db-create-s9lw5\" (UID: \"c9796afa-7a50-49c7-b85b-8e3075f92596\") " pod="openstack/neutron-db-create-s9lw5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.694793 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70e49f50-c6fb-46b5-89ae-aa379290cc57-operator-scripts\") pod \"neutron-05bd-account-create-update-n2bst\" (UID: \"70e49f50-c6fb-46b5-89ae-aa379290cc57\") " pod="openstack/neutron-05bd-account-create-update-n2bst" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.694830 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-j7snb\" (UniqueName: \"kubernetes.io/projected/c9796afa-7a50-49c7-b85b-8e3075f92596-kube-api-access-j7snb\") pod \"neutron-db-create-s9lw5\" (UID: \"c9796afa-7a50-49c7-b85b-8e3075f92596\") " pod="openstack/neutron-db-create-s9lw5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.695421 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8d5b185-9950-4ff2-b56c-278a766f3c02-operator-scripts\") pod \"barbican-db-create-8zf24\" (UID: \"e8d5b185-9950-4ff2-b56c-278a766f3c02\") " pod="openstack/barbican-db-create-8zf24" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.695535 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944f457d-a34a-4f92-8172-a23175048fad-operator-scripts\") pod \"barbican-0e19-account-create-update-r4lvm\" (UID: \"944f457d-a34a-4f92-8172-a23175048fad\") " pod="openstack/barbican-0e19-account-create-update-r4lvm" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.714923 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.715098 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rmw5\" (UniqueName: \"kubernetes.io/projected/e8d5b185-9950-4ff2-b56c-278a766f3c02-kube-api-access-6rmw5\") pod \"barbican-db-create-8zf24\" (UID: \"e8d5b185-9950-4ff2-b56c-278a766f3c02\") " pod="openstack/barbican-db-create-8zf24" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.715323 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nb74\" (UniqueName: \"kubernetes.io/projected/944f457d-a34a-4f92-8172-a23175048fad-kube-api-access-5nb74\") pod \"barbican-0e19-account-create-update-r4lvm\" (UID: \"944f457d-a34a-4f92-8172-a23175048fad\") " pod="openstack/barbican-0e19-account-create-update-r4lvm" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.797134 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9796afa-7a50-49c7-b85b-8e3075f92596-operator-scripts\") pod \"neutron-db-create-s9lw5\" (UID: \"c9796afa-7a50-49c7-b85b-8e3075f92596\") " pod="openstack/neutron-db-create-s9lw5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.797213 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70e49f50-c6fb-46b5-89ae-aa379290cc57-operator-scripts\") pod \"neutron-05bd-account-create-update-n2bst\" (UID: \"70e49f50-c6fb-46b5-89ae-aa379290cc57\") " pod="openstack/neutron-05bd-account-create-update-n2bst" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.797255 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7snb\" (UniqueName: \"kubernetes.io/projected/c9796afa-7a50-49c7-b85b-8e3075f92596-kube-api-access-j7snb\") pod \"neutron-db-create-s9lw5\" (UID: 
\"c9796afa-7a50-49c7-b85b-8e3075f92596\") " pod="openstack/neutron-db-create-s9lw5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.797381 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9wjs\" (UniqueName: \"kubernetes.io/projected/70e49f50-c6fb-46b5-89ae-aa379290cc57-kube-api-access-h9wjs\") pod \"neutron-05bd-account-create-update-n2bst\" (UID: \"70e49f50-c6fb-46b5-89ae-aa379290cc57\") " pod="openstack/neutron-05bd-account-create-update-n2bst" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.797903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9796afa-7a50-49c7-b85b-8e3075f92596-operator-scripts\") pod \"neutron-db-create-s9lw5\" (UID: \"c9796afa-7a50-49c7-b85b-8e3075f92596\") " pod="openstack/neutron-db-create-s9lw5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.801352 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8zf24" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.801918 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70e49f50-c6fb-46b5-89ae-aa379290cc57-operator-scripts\") pod \"neutron-05bd-account-create-update-n2bst\" (UID: \"70e49f50-c6fb-46b5-89ae-aa379290cc57\") " pod="openstack/neutron-05bd-account-create-update-n2bst" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.819850 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7snb\" (UniqueName: \"kubernetes.io/projected/c9796afa-7a50-49c7-b85b-8e3075f92596-kube-api-access-j7snb\") pod \"neutron-db-create-s9lw5\" (UID: \"c9796afa-7a50-49c7-b85b-8e3075f92596\") " pod="openstack/neutron-db-create-s9lw5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.820377 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h9wjs\" (UniqueName: \"kubernetes.io/projected/70e49f50-c6fb-46b5-89ae-aa379290cc57-kube-api-access-h9wjs\") pod \"neutron-05bd-account-create-update-n2bst\" (UID: \"70e49f50-c6fb-46b5-89ae-aa379290cc57\") " pod="openstack/neutron-05bd-account-create-update-n2bst" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.834926 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0e19-account-create-update-r4lvm" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.864202 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s9lw5" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.900472 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-05bd-account-create-update-n2bst" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.938381 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" event={"ID":"b3c33787-c8a9-46fa-b932-cedfe284d377","Type":"ContainerStarted","Data":"5f89293ae1efb0c2dd0ceae3c7f8e96b41213720ec9e330d4f11df65222d7ac7"} Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.939856 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.955541 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-65ee-account-create-update-ktsbv"] Feb 16 21:15:41 crc kubenswrapper[4805]: I0216 21:15:41.978345 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" podStartSLOduration=3.978327642 podStartE2EDuration="3.978327642s" podCreationTimestamp="2026-02-16 21:15:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:41.963668247 +0000 UTC 
m=+1159.782351532" watchObservedRunningTime="2026-02-16 21:15:41.978327642 +0000 UTC m=+1159.797010937" Feb 16 21:15:42 crc kubenswrapper[4805]: I0216 21:15:42.089061 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-49c2p"] Feb 16 21:15:42 crc kubenswrapper[4805]: I0216 21:15:42.264013 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d98e-account-create-update-t8mk9"] Feb 16 21:15:42 crc kubenswrapper[4805]: I0216 21:15:42.294877 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-n46mw"] Feb 16 21:15:42 crc kubenswrapper[4805]: I0216 21:15:42.519469 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-shtt5"] Feb 16 21:15:42 crc kubenswrapper[4805]: I0216 21:15:42.539221 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8zf24"] Feb 16 21:15:42 crc kubenswrapper[4805]: I0216 21:15:42.712874 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0e19-account-create-update-r4lvm"] Feb 16 21:15:42 crc kubenswrapper[4805]: I0216 21:15:42.747080 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s9lw5"] Feb 16 21:15:42 crc kubenswrapper[4805]: I0216 21:15:42.768628 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-05bd-account-create-update-n2bst"] Feb 16 21:15:42 crc kubenswrapper[4805]: W0216 21:15:42.794288 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9796afa_7a50_49c7_b85b_8e3075f92596.slice/crio-501eedea69c2f9c088b70d982f8179166c76c39a4356340f241c88512671eb96 WatchSource:0}: Error finding container 501eedea69c2f9c088b70d982f8179166c76c39a4356340f241c88512671eb96: Status 404 returned error can't find the container with id 501eedea69c2f9c088b70d982f8179166c76c39a4356340f241c88512671eb96 Feb 16 21:15:43 crc 
kubenswrapper[4805]: W0216 21:15:43.003521 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70e49f50_c6fb_46b5_89ae_aa379290cc57.slice/crio-a641dc62967fbc2b530fdf8a7c77a1f81fc1985edfdb282d21f0147f61560270 WatchSource:0}: Error finding container a641dc62967fbc2b530fdf8a7c77a1f81fc1985edfdb282d21f0147f61560270: Status 404 returned error can't find the container with id a641dc62967fbc2b530fdf8a7c77a1f81fc1985edfdb282d21f0147f61560270 Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.009956 4805 generic.go:334] "Generic (PLEG): container finished" podID="49902100-6d13-4aa5-9e40-fd76424f5dd4" containerID="e7ffdc4eac9f43b392b1a53d3bb8dad0dacfe134d4ac7a56c3efc3b6a9b09932" exitCode=0 Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.010076 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-65ee-account-create-update-ktsbv" event={"ID":"49902100-6d13-4aa5-9e40-fd76424f5dd4","Type":"ContainerDied","Data":"e7ffdc4eac9f43b392b1a53d3bb8dad0dacfe134d4ac7a56c3efc3b6a9b09932"} Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.010128 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-65ee-account-create-update-ktsbv" event={"ID":"49902100-6d13-4aa5-9e40-fd76424f5dd4","Type":"ContainerStarted","Data":"8954db1a57210f477c54de5298f0086cdb0d58b3828685067620e5b558fe56e0"} Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.011962 4805 generic.go:334] "Generic (PLEG): container finished" podID="7a86d7c1-b6fa-410b-abf0-3f809f09ce66" containerID="a8bf4f994787d3fd2c1a49c220460951d5500dff075ed1f3c574a020061c9ac8" exitCode=0 Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.012058 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-49c2p" event={"ID":"7a86d7c1-b6fa-410b-abf0-3f809f09ce66","Type":"ContainerDied","Data":"a8bf4f994787d3fd2c1a49c220460951d5500dff075ed1f3c574a020061c9ac8"} Feb 16 21:15:43 
crc kubenswrapper[4805]: I0216 21:15:43.012111 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-49c2p" event={"ID":"7a86d7c1-b6fa-410b-abf0-3f809f09ce66","Type":"ContainerStarted","Data":"abeb99b192b99f428e590be42bb40422ffb82a7c9e3dc1ad069e05d70128f1f9"} Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.015383 4805 generic.go:334] "Generic (PLEG): container finished" podID="e8d5b185-9950-4ff2-b56c-278a766f3c02" containerID="6398831b4bc144e181e8b8a03b045a9fb8d0b974446785722711afffce070b18" exitCode=0 Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.015487 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8zf24" event={"ID":"e8d5b185-9950-4ff2-b56c-278a766f3c02","Type":"ContainerDied","Data":"6398831b4bc144e181e8b8a03b045a9fb8d0b974446785722711afffce070b18"} Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.015560 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8zf24" event={"ID":"e8d5b185-9950-4ff2-b56c-278a766f3c02","Type":"ContainerStarted","Data":"4bdc52164126d4a348bc93bdc57381cb21229c59268dd1db5ec969ced0c77fee"} Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.017133 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-shtt5" event={"ID":"4ea5126d-5794-4444-968f-696bee9afc30","Type":"ContainerStarted","Data":"10acc5d90365a74b990e03cdd59bdfce909ebec5ae0dfdb5bde6280761de76ab"} Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.018758 4805 generic.go:334] "Generic (PLEG): container finished" podID="0c3e5581-5041-48ca-be14-1220df2a86d8" containerID="1ed179f65704e7a0c7294a3db5a64d5b69d9131b7dce11a4f8b893cd233cf06b" exitCode=0 Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.018884 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-n46mw" 
event={"ID":"0c3e5581-5041-48ca-be14-1220df2a86d8","Type":"ContainerDied","Data":"1ed179f65704e7a0c7294a3db5a64d5b69d9131b7dce11a4f8b893cd233cf06b"} Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.018937 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-n46mw" event={"ID":"0c3e5581-5041-48ca-be14-1220df2a86d8","Type":"ContainerStarted","Data":"4a182433ea8096c66f78b215c7c54e183ea4e6a911c2190f00293bdf808d9f6a"} Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.021068 4805 generic.go:334] "Generic (PLEG): container finished" podID="449f249d-010e-41e2-8314-9dd16925c7ae" containerID="5f81022720a2755d99b601153f4f079e54187a9c57a8468b8d413a5c4407d35b" exitCode=0 Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.021161 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d98e-account-create-update-t8mk9" event={"ID":"449f249d-010e-41e2-8314-9dd16925c7ae","Type":"ContainerDied","Data":"5f81022720a2755d99b601153f4f079e54187a9c57a8468b8d413a5c4407d35b"} Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.021194 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d98e-account-create-update-t8mk9" event={"ID":"449f249d-010e-41e2-8314-9dd16925c7ae","Type":"ContainerStarted","Data":"b28d3daa25cf714ffd250e51a78ad344dda00329991596897bd7cb1855425bc0"} Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.023036 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0e19-account-create-update-r4lvm" event={"ID":"944f457d-a34a-4f92-8172-a23175048fad","Type":"ContainerStarted","Data":"324f60f9df8ec20c4a538bd2878a5a03dbe8811610e12acb0a7291d479eca6c4"} Feb 16 21:15:43 crc kubenswrapper[4805]: I0216 21:15:43.034708 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s9lw5" 
event={"ID":"c9796afa-7a50-49c7-b85b-8e3075f92596","Type":"ContainerStarted","Data":"501eedea69c2f9c088b70d982f8179166c76c39a4356340f241c88512671eb96"} Feb 16 21:15:44 crc kubenswrapper[4805]: I0216 21:15:44.052496 4805 generic.go:334] "Generic (PLEG): container finished" podID="944f457d-a34a-4f92-8172-a23175048fad" containerID="41e0ab25fa658e62daeda20086a7cd82cbaf100a85732ed267b08193073defe2" exitCode=0 Feb 16 21:15:44 crc kubenswrapper[4805]: I0216 21:15:44.052552 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0e19-account-create-update-r4lvm" event={"ID":"944f457d-a34a-4f92-8172-a23175048fad","Type":"ContainerDied","Data":"41e0ab25fa658e62daeda20086a7cd82cbaf100a85732ed267b08193073defe2"} Feb 16 21:15:44 crc kubenswrapper[4805]: I0216 21:15:44.055148 4805 generic.go:334] "Generic (PLEG): container finished" podID="c9796afa-7a50-49c7-b85b-8e3075f92596" containerID="c3d37194446c317bc153ef00e60fc26239293ddc4fabf476a2383ba85b708744" exitCode=0 Feb 16 21:15:44 crc kubenswrapper[4805]: I0216 21:15:44.055228 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s9lw5" event={"ID":"c9796afa-7a50-49c7-b85b-8e3075f92596","Type":"ContainerDied","Data":"c3d37194446c317bc153ef00e60fc26239293ddc4fabf476a2383ba85b708744"} Feb 16 21:15:44 crc kubenswrapper[4805]: I0216 21:15:44.058875 4805 generic.go:334] "Generic (PLEG): container finished" podID="70e49f50-c6fb-46b5-89ae-aa379290cc57" containerID="cebfc106de1608a7537c583999716b1f39f3a11e09531400d653eec3f941695b" exitCode=0 Feb 16 21:15:44 crc kubenswrapper[4805]: I0216 21:15:44.058929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05bd-account-create-update-n2bst" event={"ID":"70e49f50-c6fb-46b5-89ae-aa379290cc57","Type":"ContainerDied","Data":"cebfc106de1608a7537c583999716b1f39f3a11e09531400d653eec3f941695b"} Feb 16 21:15:44 crc kubenswrapper[4805]: I0216 21:15:44.058968 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-05bd-account-create-update-n2bst" event={"ID":"70e49f50-c6fb-46b5-89ae-aa379290cc57","Type":"ContainerStarted","Data":"a641dc62967fbc2b530fdf8a7c77a1f81fc1985edfdb282d21f0147f61560270"} Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.095180 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8zf24" event={"ID":"e8d5b185-9950-4ff2-b56c-278a766f3c02","Type":"ContainerDied","Data":"4bdc52164126d4a348bc93bdc57381cb21229c59268dd1db5ec969ced0c77fee"} Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.095615 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bdc52164126d4a348bc93bdc57381cb21229c59268dd1db5ec969ced0c77fee" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.099520 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-n46mw" event={"ID":"0c3e5581-5041-48ca-be14-1220df2a86d8","Type":"ContainerDied","Data":"4a182433ea8096c66f78b215c7c54e183ea4e6a911c2190f00293bdf808d9f6a"} Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.099549 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a182433ea8096c66f78b215c7c54e183ea4e6a911c2190f00293bdf808d9f6a" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.102464 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d98e-account-create-update-t8mk9" event={"ID":"449f249d-010e-41e2-8314-9dd16925c7ae","Type":"ContainerDied","Data":"b28d3daa25cf714ffd250e51a78ad344dda00329991596897bd7cb1855425bc0"} Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.102487 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b28d3daa25cf714ffd250e51a78ad344dda00329991596897bd7cb1855425bc0" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.104091 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05bd-account-create-update-n2bst" 
event={"ID":"70e49f50-c6fb-46b5-89ae-aa379290cc57","Type":"ContainerDied","Data":"a641dc62967fbc2b530fdf8a7c77a1f81fc1985edfdb282d21f0147f61560270"} Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.104113 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a641dc62967fbc2b530fdf8a7c77a1f81fc1985edfdb282d21f0147f61560270" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.109670 4805 generic.go:334] "Generic (PLEG): container finished" podID="c6769912-8cfc-48b8-b709-5398ca380e38" containerID="84b5052f6b1d46e1a9072fea4950033162181a3fc15e5f9510c0132ff442ebb9" exitCode=0 Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.109746 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6769912-8cfc-48b8-b709-5398ca380e38","Type":"ContainerDied","Data":"84b5052f6b1d46e1a9072fea4950033162181a3fc15e5f9510c0132ff442ebb9"} Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.112900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0e19-account-create-update-r4lvm" event={"ID":"944f457d-a34a-4f92-8172-a23175048fad","Type":"ContainerDied","Data":"324f60f9df8ec20c4a538bd2878a5a03dbe8811610e12acb0a7291d479eca6c4"} Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.112957 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="324f60f9df8ec20c4a538bd2878a5a03dbe8811610e12acb0a7291d479eca6c4" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.118516 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s9lw5" event={"ID":"c9796afa-7a50-49c7-b85b-8e3075f92596","Type":"ContainerDied","Data":"501eedea69c2f9c088b70d982f8179166c76c39a4356340f241c88512671eb96"} Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.118555 4805 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="501eedea69c2f9c088b70d982f8179166c76c39a4356340f241c88512671eb96" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.120788 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-65ee-account-create-update-ktsbv" event={"ID":"49902100-6d13-4aa5-9e40-fd76424f5dd4","Type":"ContainerDied","Data":"8954db1a57210f477c54de5298f0086cdb0d58b3828685067620e5b558fe56e0"} Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.120844 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8954db1a57210f477c54de5298f0086cdb0d58b3828685067620e5b558fe56e0" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.122870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-49c2p" event={"ID":"7a86d7c1-b6fa-410b-abf0-3f809f09ce66","Type":"ContainerDied","Data":"abeb99b192b99f428e590be42bb40422ffb82a7c9e3dc1ad069e05d70128f1f9"} Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.122896 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abeb99b192b99f428e590be42bb40422ffb82a7c9e3dc1ad069e05d70128f1f9" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.276108 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0e19-account-create-update-r4lvm" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.296233 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s9lw5" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.308770 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-49c2p" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.332934 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-65ee-account-create-update-ktsbv" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.341522 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944f457d-a34a-4f92-8172-a23175048fad-operator-scripts\") pod \"944f457d-a34a-4f92-8172-a23175048fad\" (UID: \"944f457d-a34a-4f92-8172-a23175048fad\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.341699 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nb74\" (UniqueName: \"kubernetes.io/projected/944f457d-a34a-4f92-8172-a23175048fad-kube-api-access-5nb74\") pod \"944f457d-a34a-4f92-8172-a23175048fad\" (UID: \"944f457d-a34a-4f92-8172-a23175048fad\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.346161 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944f457d-a34a-4f92-8172-a23175048fad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "944f457d-a34a-4f92-8172-a23175048fad" (UID: "944f457d-a34a-4f92-8172-a23175048fad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.349952 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944f457d-a34a-4f92-8172-a23175048fad-kube-api-access-5nb74" (OuterVolumeSpecName: "kube-api-access-5nb74") pod "944f457d-a34a-4f92-8172-a23175048fad" (UID: "944f457d-a34a-4f92-8172-a23175048fad"). InnerVolumeSpecName "kube-api-access-5nb74". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.356701 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-05bd-account-create-update-n2bst" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.443699 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t724c\" (UniqueName: \"kubernetes.io/projected/49902100-6d13-4aa5-9e40-fd76424f5dd4-kube-api-access-t724c\") pod \"49902100-6d13-4aa5-9e40-fd76424f5dd4\" (UID: \"49902100-6d13-4aa5-9e40-fd76424f5dd4\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.443804 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49902100-6d13-4aa5-9e40-fd76424f5dd4-operator-scripts\") pod \"49902100-6d13-4aa5-9e40-fd76424f5dd4\" (UID: \"49902100-6d13-4aa5-9e40-fd76424f5dd4\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.443847 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9796afa-7a50-49c7-b85b-8e3075f92596-operator-scripts\") pod \"c9796afa-7a50-49c7-b85b-8e3075f92596\" (UID: \"c9796afa-7a50-49c7-b85b-8e3075f92596\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.443867 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7snb\" (UniqueName: \"kubernetes.io/projected/c9796afa-7a50-49c7-b85b-8e3075f92596-kube-api-access-j7snb\") pod \"c9796afa-7a50-49c7-b85b-8e3075f92596\" (UID: \"c9796afa-7a50-49c7-b85b-8e3075f92596\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.443901 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flhx2\" (UniqueName: \"kubernetes.io/projected/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-kube-api-access-flhx2\") pod \"7a86d7c1-b6fa-410b-abf0-3f809f09ce66\" (UID: \"7a86d7c1-b6fa-410b-abf0-3f809f09ce66\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.443948 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70e49f50-c6fb-46b5-89ae-aa379290cc57-operator-scripts\") pod \"70e49f50-c6fb-46b5-89ae-aa379290cc57\" (UID: \"70e49f50-c6fb-46b5-89ae-aa379290cc57\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.444113 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9wjs\" (UniqueName: \"kubernetes.io/projected/70e49f50-c6fb-46b5-89ae-aa379290cc57-kube-api-access-h9wjs\") pod \"70e49f50-c6fb-46b5-89ae-aa379290cc57\" (UID: \"70e49f50-c6fb-46b5-89ae-aa379290cc57\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.444147 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-operator-scripts\") pod \"7a86d7c1-b6fa-410b-abf0-3f809f09ce66\" (UID: \"7a86d7c1-b6fa-410b-abf0-3f809f09ce66\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.444536 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944f457d-a34a-4f92-8172-a23175048fad-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.444550 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nb74\" (UniqueName: \"kubernetes.io/projected/944f457d-a34a-4f92-8172-a23175048fad-kube-api-access-5nb74\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.445009 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9796afa-7a50-49c7-b85b-8e3075f92596-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9796afa-7a50-49c7-b85b-8e3075f92596" (UID: "c9796afa-7a50-49c7-b85b-8e3075f92596"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.445035 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a86d7c1-b6fa-410b-abf0-3f809f09ce66" (UID: "7a86d7c1-b6fa-410b-abf0-3f809f09ce66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.445098 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e49f50-c6fb-46b5-89ae-aa379290cc57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70e49f50-c6fb-46b5-89ae-aa379290cc57" (UID: "70e49f50-c6fb-46b5-89ae-aa379290cc57"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.445313 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49902100-6d13-4aa5-9e40-fd76424f5dd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49902100-6d13-4aa5-9e40-fd76424f5dd4" (UID: "49902100-6d13-4aa5-9e40-fd76424f5dd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.450073 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e49f50-c6fb-46b5-89ae-aa379290cc57-kube-api-access-h9wjs" (OuterVolumeSpecName: "kube-api-access-h9wjs") pod "70e49f50-c6fb-46b5-89ae-aa379290cc57" (UID: "70e49f50-c6fb-46b5-89ae-aa379290cc57"). InnerVolumeSpecName "kube-api-access-h9wjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.450469 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49902100-6d13-4aa5-9e40-fd76424f5dd4-kube-api-access-t724c" (OuterVolumeSpecName: "kube-api-access-t724c") pod "49902100-6d13-4aa5-9e40-fd76424f5dd4" (UID: "49902100-6d13-4aa5-9e40-fd76424f5dd4"). InnerVolumeSpecName "kube-api-access-t724c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.455794 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9796afa-7a50-49c7-b85b-8e3075f92596-kube-api-access-j7snb" (OuterVolumeSpecName: "kube-api-access-j7snb") pod "c9796afa-7a50-49c7-b85b-8e3075f92596" (UID: "c9796afa-7a50-49c7-b85b-8e3075f92596"). InnerVolumeSpecName "kube-api-access-j7snb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.455984 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-n46mw" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.462950 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8zf24" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.466611 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-kube-api-access-flhx2" (OuterVolumeSpecName: "kube-api-access-flhx2") pod "7a86d7c1-b6fa-410b-abf0-3f809f09ce66" (UID: "7a86d7c1-b6fa-410b-abf0-3f809f09ce66"). InnerVolumeSpecName "kube-api-access-flhx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.476091 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d98e-account-create-update-t8mk9" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.550546 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtthz\" (UniqueName: \"kubernetes.io/projected/449f249d-010e-41e2-8314-9dd16925c7ae-kube-api-access-rtthz\") pod \"449f249d-010e-41e2-8314-9dd16925c7ae\" (UID: \"449f249d-010e-41e2-8314-9dd16925c7ae\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.550643 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmw5\" (UniqueName: \"kubernetes.io/projected/e8d5b185-9950-4ff2-b56c-278a766f3c02-kube-api-access-6rmw5\") pod \"e8d5b185-9950-4ff2-b56c-278a766f3c02\" (UID: \"e8d5b185-9950-4ff2-b56c-278a766f3c02\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.550683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/449f249d-010e-41e2-8314-9dd16925c7ae-operator-scripts\") pod \"449f249d-010e-41e2-8314-9dd16925c7ae\" (UID: \"449f249d-010e-41e2-8314-9dd16925c7ae\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.550702 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3e5581-5041-48ca-be14-1220df2a86d8-operator-scripts\") pod \"0c3e5581-5041-48ca-be14-1220df2a86d8\" (UID: \"0c3e5581-5041-48ca-be14-1220df2a86d8\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.550780 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8d5b185-9950-4ff2-b56c-278a766f3c02-operator-scripts\") pod \"e8d5b185-9950-4ff2-b56c-278a766f3c02\" (UID: \"e8d5b185-9950-4ff2-b56c-278a766f3c02\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.550888 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pl287\" (UniqueName: \"kubernetes.io/projected/0c3e5581-5041-48ca-be14-1220df2a86d8-kube-api-access-pl287\") pod \"0c3e5581-5041-48ca-be14-1220df2a86d8\" (UID: \"0c3e5581-5041-48ca-be14-1220df2a86d8\") " Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.551331 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flhx2\" (UniqueName: \"kubernetes.io/projected/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-kube-api-access-flhx2\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.551345 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70e49f50-c6fb-46b5-89ae-aa379290cc57-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.551354 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9wjs\" (UniqueName: \"kubernetes.io/projected/70e49f50-c6fb-46b5-89ae-aa379290cc57-kube-api-access-h9wjs\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.551364 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a86d7c1-b6fa-410b-abf0-3f809f09ce66-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.551373 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t724c\" (UniqueName: \"kubernetes.io/projected/49902100-6d13-4aa5-9e40-fd76424f5dd4-kube-api-access-t724c\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.551381 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49902100-6d13-4aa5-9e40-fd76424f5dd4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.551391 
4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9796afa-7a50-49c7-b85b-8e3075f92596-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.551399 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7snb\" (UniqueName: \"kubernetes.io/projected/c9796afa-7a50-49c7-b85b-8e3075f92596-kube-api-access-j7snb\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.551934 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3e5581-5041-48ca-be14-1220df2a86d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c3e5581-5041-48ca-be14-1220df2a86d8" (UID: "0c3e5581-5041-48ca-be14-1220df2a86d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.552241 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d5b185-9950-4ff2-b56c-278a766f3c02-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8d5b185-9950-4ff2-b56c-278a766f3c02" (UID: "e8d5b185-9950-4ff2-b56c-278a766f3c02"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.554638 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/449f249d-010e-41e2-8314-9dd16925c7ae-kube-api-access-rtthz" (OuterVolumeSpecName: "kube-api-access-rtthz") pod "449f249d-010e-41e2-8314-9dd16925c7ae" (UID: "449f249d-010e-41e2-8314-9dd16925c7ae"). InnerVolumeSpecName "kube-api-access-rtthz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.554692 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449f249d-010e-41e2-8314-9dd16925c7ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "449f249d-010e-41e2-8314-9dd16925c7ae" (UID: "449f249d-010e-41e2-8314-9dd16925c7ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.554937 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c3e5581-5041-48ca-be14-1220df2a86d8-kube-api-access-pl287" (OuterVolumeSpecName: "kube-api-access-pl287") pod "0c3e5581-5041-48ca-be14-1220df2a86d8" (UID: "0c3e5581-5041-48ca-be14-1220df2a86d8"). InnerVolumeSpecName "kube-api-access-pl287". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.583012 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d5b185-9950-4ff2-b56c-278a766f3c02-kube-api-access-6rmw5" (OuterVolumeSpecName: "kube-api-access-6rmw5") pod "e8d5b185-9950-4ff2-b56c-278a766f3c02" (UID: "e8d5b185-9950-4ff2-b56c-278a766f3c02"). InnerVolumeSpecName "kube-api-access-6rmw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.654203 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rmw5\" (UniqueName: \"kubernetes.io/projected/e8d5b185-9950-4ff2-b56c-278a766f3c02-kube-api-access-6rmw5\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.654236 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/449f249d-010e-41e2-8314-9dd16925c7ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.654245 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3e5581-5041-48ca-be14-1220df2a86d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.654253 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8d5b185-9950-4ff2-b56c-278a766f3c02-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.654262 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl287\" (UniqueName: \"kubernetes.io/projected/0c3e5581-5041-48ca-be14-1220df2a86d8-kube-api-access-pl287\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:47 crc kubenswrapper[4805]: I0216 21:15:47.654270 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtthz\" (UniqueName: \"kubernetes.io/projected/449f249d-010e-41e2-8314-9dd16925c7ae-kube-api-access-rtthz\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.143790 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"c6769912-8cfc-48b8-b709-5398ca380e38","Type":"ContainerStarted","Data":"e5c5405dc3b2b66885a6c5262a36afe4523cd723bd7239545bed5f2c8b942a13"} Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.170292 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-49c2p" Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.170351 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-shtt5" event={"ID":"4ea5126d-5794-4444-968f-696bee9afc30","Type":"ContainerStarted","Data":"d8caf2fe733e6092a6c221917d52fd2a4997f7c77debdaa93d964748f39b47e9"} Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.170435 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8zf24" Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.170476 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-n46mw" Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.170488 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-65ee-account-create-update-ktsbv" Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.170518 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0e19-account-create-update-r4lvm" Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.170611 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s9lw5" Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.170606 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d98e-account-create-update-t8mk9" Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.177206 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-05bd-account-create-update-n2bst" Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.202604 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-shtt5" podStartSLOduration=2.6822062300000002 podStartE2EDuration="7.202582452s" podCreationTimestamp="2026-02-16 21:15:41 +0000 UTC" firstStartedPulling="2026-02-16 21:15:42.53560846 +0000 UTC m=+1160.354291755" lastFinishedPulling="2026-02-16 21:15:47.055984682 +0000 UTC m=+1164.874667977" observedRunningTime="2026-02-16 21:15:48.191306858 +0000 UTC m=+1166.009990163" watchObservedRunningTime="2026-02-16 21:15:48.202582452 +0000 UTC m=+1166.021265747" Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.578305 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.660834 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lfl97"] Feb 16 21:15:48 crc kubenswrapper[4805]: I0216 21:15:48.661092 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" podUID="69ea890f-a85e-40d2-8722-71bcd489b1ec" containerName="dnsmasq-dns" containerID="cri-o://e900fe6df134219de0ae70ee025fa622baf07d7f21b83139a4369c9fd946a11c" gracePeriod=10 Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.180142 4805 generic.go:334] "Generic (PLEG): container finished" podID="69ea890f-a85e-40d2-8722-71bcd489b1ec" containerID="e900fe6df134219de0ae70ee025fa622baf07d7f21b83139a4369c9fd946a11c" exitCode=0 Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.180223 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" event={"ID":"69ea890f-a85e-40d2-8722-71bcd489b1ec","Type":"ContainerDied","Data":"e900fe6df134219de0ae70ee025fa622baf07d7f21b83139a4369c9fd946a11c"} Feb 16 21:15:49 crc 
kubenswrapper[4805]: I0216 21:15:49.180494 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" event={"ID":"69ea890f-a85e-40d2-8722-71bcd489b1ec","Type":"ContainerDied","Data":"7458b66d1988adda4cecc8119c3b959cae64bfb9fafb922f80ef6ea2b34c1f96"} Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.180519 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7458b66d1988adda4cecc8119c3b959cae64bfb9fafb922f80ef6ea2b34c1f96" Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.223650 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.290582 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-sb\") pod \"69ea890f-a85e-40d2-8722-71bcd489b1ec\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.290741 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x5dg\" (UniqueName: \"kubernetes.io/projected/69ea890f-a85e-40d2-8722-71bcd489b1ec-kube-api-access-2x5dg\") pod \"69ea890f-a85e-40d2-8722-71bcd489b1ec\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.290833 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-config\") pod \"69ea890f-a85e-40d2-8722-71bcd489b1ec\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.290916 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-dns-svc\") pod \"69ea890f-a85e-40d2-8722-71bcd489b1ec\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.290998 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-nb\") pod \"69ea890f-a85e-40d2-8722-71bcd489b1ec\" (UID: \"69ea890f-a85e-40d2-8722-71bcd489b1ec\") " Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.323932 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69ea890f-a85e-40d2-8722-71bcd489b1ec-kube-api-access-2x5dg" (OuterVolumeSpecName: "kube-api-access-2x5dg") pod "69ea890f-a85e-40d2-8722-71bcd489b1ec" (UID: "69ea890f-a85e-40d2-8722-71bcd489b1ec"). InnerVolumeSpecName "kube-api-access-2x5dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.379059 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "69ea890f-a85e-40d2-8722-71bcd489b1ec" (UID: "69ea890f-a85e-40d2-8722-71bcd489b1ec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.388807 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-config" (OuterVolumeSpecName: "config") pod "69ea890f-a85e-40d2-8722-71bcd489b1ec" (UID: "69ea890f-a85e-40d2-8722-71bcd489b1ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.393357 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.393388 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x5dg\" (UniqueName: \"kubernetes.io/projected/69ea890f-a85e-40d2-8722-71bcd489b1ec-kube-api-access-2x5dg\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.393400 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.397016 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "69ea890f-a85e-40d2-8722-71bcd489b1ec" (UID: "69ea890f-a85e-40d2-8722-71bcd489b1ec"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.398542 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "69ea890f-a85e-40d2-8722-71bcd489b1ec" (UID: "69ea890f-a85e-40d2-8722-71bcd489b1ec"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.495202 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:49 crc kubenswrapper[4805]: I0216 21:15:49.495270 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69ea890f-a85e-40d2-8722-71bcd489b1ec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:50 crc kubenswrapper[4805]: I0216 21:15:50.194071 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lfl97" Feb 16 21:15:50 crc kubenswrapper[4805]: I0216 21:15:50.219207 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lfl97"] Feb 16 21:15:50 crc kubenswrapper[4805]: I0216 21:15:50.228597 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lfl97"] Feb 16 21:15:51 crc kubenswrapper[4805]: I0216 21:15:51.205669 4805 generic.go:334] "Generic (PLEG): container finished" podID="4ea5126d-5794-4444-968f-696bee9afc30" containerID="d8caf2fe733e6092a6c221917d52fd2a4997f7c77debdaa93d964748f39b47e9" exitCode=0 Feb 16 21:15:51 crc kubenswrapper[4805]: I0216 21:15:51.205805 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-shtt5" event={"ID":"4ea5126d-5794-4444-968f-696bee9afc30","Type":"ContainerDied","Data":"d8caf2fe733e6092a6c221917d52fd2a4997f7c77debdaa93d964748f39b47e9"} Feb 16 21:15:51 crc kubenswrapper[4805]: I0216 21:15:51.613524 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69ea890f-a85e-40d2-8722-71bcd489b1ec" path="/var/lib/kubelet/pods/69ea890f-a85e-40d2-8722-71bcd489b1ec/volumes" Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.223614 4805 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6769912-8cfc-48b8-b709-5398ca380e38","Type":"ContainerStarted","Data":"c6c3d672e837d85daf0725a8c232488395aded14e485f112df999aa1151e90f4"} Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.223971 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6769912-8cfc-48b8-b709-5398ca380e38","Type":"ContainerStarted","Data":"37572378bb4408b6b369209fa487c7e9e08408884ff518aa8ae94e38a6d40327"} Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.645112 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.663552 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.66353011 podStartE2EDuration="17.66353011s" podCreationTimestamp="2026-02-16 21:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:52.264376383 +0000 UTC m=+1170.083059688" watchObservedRunningTime="2026-02-16 21:15:52.66353011 +0000 UTC m=+1170.482213405" Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.764831 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxbzx\" (UniqueName: \"kubernetes.io/projected/4ea5126d-5794-4444-968f-696bee9afc30-kube-api-access-pxbzx\") pod \"4ea5126d-5794-4444-968f-696bee9afc30\" (UID: \"4ea5126d-5794-4444-968f-696bee9afc30\") " Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.765029 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-combined-ca-bundle\") pod \"4ea5126d-5794-4444-968f-696bee9afc30\" (UID: 
\"4ea5126d-5794-4444-968f-696bee9afc30\") " Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.765260 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-config-data\") pod \"4ea5126d-5794-4444-968f-696bee9afc30\" (UID: \"4ea5126d-5794-4444-968f-696bee9afc30\") " Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.777324 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea5126d-5794-4444-968f-696bee9afc30-kube-api-access-pxbzx" (OuterVolumeSpecName: "kube-api-access-pxbzx") pod "4ea5126d-5794-4444-968f-696bee9afc30" (UID: "4ea5126d-5794-4444-968f-696bee9afc30"). InnerVolumeSpecName "kube-api-access-pxbzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.814065 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ea5126d-5794-4444-968f-696bee9afc30" (UID: "4ea5126d-5794-4444-968f-696bee9afc30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.826962 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-config-data" (OuterVolumeSpecName: "config-data") pod "4ea5126d-5794-4444-968f-696bee9afc30" (UID: "4ea5126d-5794-4444-968f-696bee9afc30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.867196 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.867233 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea5126d-5794-4444-968f-696bee9afc30-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:52 crc kubenswrapper[4805]: I0216 21:15:52.867243 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxbzx\" (UniqueName: \"kubernetes.io/projected/4ea5126d-5794-4444-968f-696bee9afc30-kube-api-access-pxbzx\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.233969 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-shtt5" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.234120 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-shtt5" event={"ID":"4ea5126d-5794-4444-968f-696bee9afc30","Type":"ContainerDied","Data":"10acc5d90365a74b990e03cdd59bdfce909ebec5ae0dfdb5bde6280761de76ab"} Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.241859 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10acc5d90365a74b990e03cdd59bdfce909ebec5ae0dfdb5bde6280761de76ab" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.506460 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wwvck"] Feb 16 21:15:53 crc kubenswrapper[4805]: E0216 21:15:53.510858 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea5126d-5794-4444-968f-696bee9afc30" containerName="keystone-db-sync" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 
21:15:53.510949 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea5126d-5794-4444-968f-696bee9afc30" containerName="keystone-db-sync" Feb 16 21:15:53 crc kubenswrapper[4805]: E0216 21:15:53.511037 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a86d7c1-b6fa-410b-abf0-3f809f09ce66" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.511101 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a86d7c1-b6fa-410b-abf0-3f809f09ce66" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: E0216 21:15:53.511180 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449f249d-010e-41e2-8314-9dd16925c7ae" containerName="mariadb-account-create-update" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.511231 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="449f249d-010e-41e2-8314-9dd16925c7ae" containerName="mariadb-account-create-update" Feb 16 21:15:53 crc kubenswrapper[4805]: E0216 21:15:53.511280 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d5b185-9950-4ff2-b56c-278a766f3c02" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.511326 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d5b185-9950-4ff2-b56c-278a766f3c02" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: E0216 21:15:53.511381 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ea890f-a85e-40d2-8722-71bcd489b1ec" containerName="dnsmasq-dns" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.511427 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ea890f-a85e-40d2-8722-71bcd489b1ec" containerName="dnsmasq-dns" Feb 16 21:15:53 crc kubenswrapper[4805]: E0216 21:15:53.511485 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49902100-6d13-4aa5-9e40-fd76424f5dd4" 
containerName="mariadb-account-create-update" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.511534 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="49902100-6d13-4aa5-9e40-fd76424f5dd4" containerName="mariadb-account-create-update" Feb 16 21:15:53 crc kubenswrapper[4805]: E0216 21:15:53.511588 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944f457d-a34a-4f92-8172-a23175048fad" containerName="mariadb-account-create-update" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.511635 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="944f457d-a34a-4f92-8172-a23175048fad" containerName="mariadb-account-create-update" Feb 16 21:15:53 crc kubenswrapper[4805]: E0216 21:15:53.511694 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9796afa-7a50-49c7-b85b-8e3075f92596" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.511770 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9796afa-7a50-49c7-b85b-8e3075f92596" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: E0216 21:15:53.512489 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ea890f-a85e-40d2-8722-71bcd489b1ec" containerName="init" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.512572 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ea890f-a85e-40d2-8722-71bcd489b1ec" containerName="init" Feb 16 21:15:53 crc kubenswrapper[4805]: E0216 21:15:53.512646 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e49f50-c6fb-46b5-89ae-aa379290cc57" containerName="mariadb-account-create-update" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.512701 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e49f50-c6fb-46b5-89ae-aa379290cc57" containerName="mariadb-account-create-update" Feb 16 21:15:53 crc kubenswrapper[4805]: E0216 21:15:53.512767 4805 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0c3e5581-5041-48ca-be14-1220df2a86d8" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.512823 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c3e5581-5041-48ca-be14-1220df2a86d8" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.513040 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ea890f-a85e-40d2-8722-71bcd489b1ec" containerName="dnsmasq-dns" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.513102 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9796afa-7a50-49c7-b85b-8e3075f92596" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.513162 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c3e5581-5041-48ca-be14-1220df2a86d8" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.513220 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e49f50-c6fb-46b5-89ae-aa379290cc57" containerName="mariadb-account-create-update" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.513277 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d5b185-9950-4ff2-b56c-278a766f3c02" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.513330 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea5126d-5794-4444-968f-696bee9afc30" containerName="keystone-db-sync" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.516757 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="944f457d-a34a-4f92-8172-a23175048fad" containerName="mariadb-account-create-update" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.516856 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="49902100-6d13-4aa5-9e40-fd76424f5dd4" containerName="mariadb-account-create-update" Feb 16 
21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.516917 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="449f249d-010e-41e2-8314-9dd16925c7ae" containerName="mariadb-account-create-update" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.516975 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a86d7c1-b6fa-410b-abf0-3f809f09ce66" containerName="mariadb-database-create" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.518085 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.567750 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wwvck"] Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.583505 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.583572 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-config\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.583634 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc 
kubenswrapper[4805]: I0216 21:15:53.583681 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.583710 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.583741 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlc9w\" (UniqueName: \"kubernetes.io/projected/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-kube-api-access-vlc9w\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.614682 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qgsth"] Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.617675 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.631424 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qgsth"] Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.631550 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.631784 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.631892 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.632019 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.632998 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xc6z5" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.685292 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.685576 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.685692 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-credential-keys\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.685777 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.685879 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlc9w\" (UniqueName: \"kubernetes.io/projected/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-kube-api-access-vlc9w\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.685968 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-scripts\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.686051 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-fernet-keys\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.686137 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-config-data\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.686234 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjlml\" (UniqueName: \"kubernetes.io/projected/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-kube-api-access-wjlml\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.686323 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.686447 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-config\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.686548 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-combined-ca-bundle\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.687783 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.689747 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.689946 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.690979 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.691578 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-config\") pod \"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.715983 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlc9w\" (UniqueName: \"kubernetes.io/projected/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-kube-api-access-vlc9w\") pod 
\"dnsmasq-dns-bbf5cc879-wwvck\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.787761 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-config-data\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.787806 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjlml\" (UniqueName: \"kubernetes.io/projected/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-kube-api-access-wjlml\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.787886 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-combined-ca-bundle\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.787958 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-credential-keys\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.787980 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-scripts\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " 
pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.788015 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-fernet-keys\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.792873 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-qs466"] Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.794169 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qs466" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.809646 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.809802 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-rjr7h" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.811078 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-combined-ca-bundle\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.811555 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-fernet-keys\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.822925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjlml\" (UniqueName: 
\"kubernetes.io/projected/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-kube-api-access-wjlml\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.825327 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-scripts\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.827861 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qs466"] Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.830593 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-config-data\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.839204 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-credential-keys\") pod \"keystone-bootstrap-qgsth\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.852716 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.893204 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qxl6\" (UniqueName: \"kubernetes.io/projected/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-kube-api-access-8qxl6\") pod \"heat-db-sync-qs466\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " pod="openstack/heat-db-sync-qs466" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.893287 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-config-data\") pod \"heat-db-sync-qs466\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " pod="openstack/heat-db-sync-qs466" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.893345 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-combined-ca-bundle\") pod \"heat-db-sync-qs466\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " pod="openstack/heat-db-sync-qs466" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.940246 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.990101 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9ms99"] Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.991924 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.995145 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qxl6\" (UniqueName: \"kubernetes.io/projected/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-kube-api-access-8qxl6\") pod \"heat-db-sync-qs466\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " pod="openstack/heat-db-sync-qs466" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.995230 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-config-data\") pod \"heat-db-sync-qs466\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " pod="openstack/heat-db-sync-qs466" Feb 16 21:15:53 crc kubenswrapper[4805]: I0216 21:15:53.995299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-combined-ca-bundle\") pod \"heat-db-sync-qs466\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " pod="openstack/heat-db-sync-qs466" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:53.998809 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9ms99"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.000079 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.000249 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.000443 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-r9xdp" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.008925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-combined-ca-bundle\") pod \"heat-db-sync-qs466\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " pod="openstack/heat-db-sync-qs466" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.019839 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-txbxn"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.020027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-config-data\") pod \"heat-db-sync-qs466\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " pod="openstack/heat-db-sync-qs466" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.021337 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-txbxn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.025505 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6kmxj" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.025687 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.026594 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-txbxn"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.072391 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qxl6\" (UniqueName: \"kubernetes.io/projected/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-kube-api-access-8qxl6\") pod \"heat-db-sync-qs466\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " pod="openstack/heat-db-sync-qs466" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.078050 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-8kxrn"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.088302 4805 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.096438 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-s66bn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.096614 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.096713 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.097106 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-combined-ca-bundle\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.097161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-config-data\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.097224 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zxq9\" (UniqueName: \"kubernetes.io/projected/c8125a07-0bfb-4381-80e2-bf5bb1525026-kube-api-access-7zxq9\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.097292 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-db-sync-config-data\") pod \"barbican-db-sync-txbxn\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " pod="openstack/barbican-db-sync-txbxn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.097315 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdkf8\" (UniqueName: \"kubernetes.io/projected/ab6c7759-7bcf-4efa-b50f-b73e87f20842-kube-api-access-fdkf8\") pod \"barbican-db-sync-txbxn\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " pod="openstack/barbican-db-sync-txbxn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.097352 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-combined-ca-bundle\") pod \"barbican-db-sync-txbxn\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " pod="openstack/barbican-db-sync-txbxn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.097402 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-db-sync-config-data\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.097438 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8125a07-0bfb-4381-80e2-bf5bb1525026-etc-machine-id\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.097453 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-scripts\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.115252 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8kxrn"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.203423 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-combined-ca-bundle\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.213883 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fppn6\" (UniqueName: \"kubernetes.io/projected/1050edad-f277-4299-ab1d-c812bc4ae573-kube-api-access-fppn6\") pod \"neutron-db-sync-8kxrn\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.213945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-config-data\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.214094 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zxq9\" (UniqueName: \"kubernetes.io/projected/c8125a07-0bfb-4381-80e2-bf5bb1525026-kube-api-access-7zxq9\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.214155 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-db-sync-config-data\") pod \"barbican-db-sync-txbxn\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " pod="openstack/barbican-db-sync-txbxn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.214205 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdkf8\" (UniqueName: \"kubernetes.io/projected/ab6c7759-7bcf-4efa-b50f-b73e87f20842-kube-api-access-fdkf8\") pod \"barbican-db-sync-txbxn\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " pod="openstack/barbican-db-sync-txbxn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.214302 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-combined-ca-bundle\") pod \"barbican-db-sync-txbxn\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " pod="openstack/barbican-db-sync-txbxn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.214436 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-db-sync-config-data\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.214481 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-combined-ca-bundle\") pod \"neutron-db-sync-8kxrn\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.214547 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8125a07-0bfb-4381-80e2-bf5bb1525026-etc-machine-id\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.214562 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-scripts\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.214629 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-config\") pod \"neutron-db-sync-8kxrn\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.215394 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8125a07-0bfb-4381-80e2-bf5bb1525026-etc-machine-id\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.220339 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-config-data\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.221394 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-combined-ca-bundle\") pod \"cinder-db-sync-9ms99\" (UID: 
\"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.225331 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-db-sync-config-data\") pod \"barbican-db-sync-txbxn\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " pod="openstack/barbican-db-sync-txbxn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.227005 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-db-sync-config-data\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.227119 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-scripts\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.227700 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-z62rr"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.232286 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qs466" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.234050 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-combined-ca-bundle\") pod \"barbican-db-sync-txbxn\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " pod="openstack/barbican-db-sync-txbxn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.234532 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.236954 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdkf8\" (UniqueName: \"kubernetes.io/projected/ab6c7759-7bcf-4efa-b50f-b73e87f20842-kube-api-access-fdkf8\") pod \"barbican-db-sync-txbxn\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " pod="openstack/barbican-db-sync-txbxn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.239135 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zxq9\" (UniqueName: \"kubernetes.io/projected/c8125a07-0bfb-4381-80e2-bf5bb1525026-kube-api-access-7zxq9\") pod \"cinder-db-sync-9ms99\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.250541 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-79kmv" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.250766 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.250876 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.267663 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-z62rr"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.305951 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wwvck"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.320463 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crgk5\" (UniqueName: \"kubernetes.io/projected/d50fc8fa-34b3-48cf-9e68-c474509271a3-kube-api-access-crgk5\") pod \"placement-db-sync-z62rr\" (UID: 
\"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.320554 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-combined-ca-bundle\") pod \"neutron-db-sync-8kxrn\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.320597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-scripts\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.320619 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50fc8fa-34b3-48cf-9e68-c474509271a3-logs\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.320647 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-config\") pod \"neutron-db-sync-8kxrn\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.320680 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fppn6\" (UniqueName: \"kubernetes.io/projected/1050edad-f277-4299-ab1d-c812bc4ae573-kube-api-access-fppn6\") pod \"neutron-db-sync-8kxrn\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:15:54 crc 
kubenswrapper[4805]: I0216 21:15:54.320707 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-combined-ca-bundle\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.320777 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-config-data\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.322183 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wb67p"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.323854 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.328013 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-combined-ca-bundle\") pod \"neutron-db-sync-8kxrn\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.337554 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-config\") pod \"neutron-db-sync-8kxrn\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.348072 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fppn6\" (UniqueName: \"kubernetes.io/projected/1050edad-f277-4299-ab1d-c812bc4ae573-kube-api-access-fppn6\") pod \"neutron-db-sync-8kxrn\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.357870 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wb67p"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.371810 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.374039 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9ms99" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.380943 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.392292 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.395598 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.413086 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-txbxn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.429284 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.429353 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.429621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-scripts\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.429678 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50fc8fa-34b3-48cf-9e68-c474509271a3-logs\") pod 
\"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.429830 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-run-httpd\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.429928 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r79f8\" (UniqueName: \"kubernetes.io/projected/104ec6b3-3a02-486e-8948-0aeb16bbddd8-kube-api-access-r79f8\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.430041 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-log-httpd\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.430093 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gvrl\" (UniqueName: \"kubernetes.io/projected/eea9ce90-7516-47bc-844e-224cf41929e4-kube-api-access-9gvrl\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.430231 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-combined-ca-bundle\") pod \"placement-db-sync-z62rr\" (UID: 
\"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.430291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.430406 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.430442 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.430477 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.430644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-config-data\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " 
pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.430689 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-config-data\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.430925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-config\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.430970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crgk5\" (UniqueName: \"kubernetes.io/projected/d50fc8fa-34b3-48cf-9e68-c474509271a3-kube-api-access-crgk5\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.431017 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-scripts\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.436412 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50fc8fa-34b3-48cf-9e68-c474509271a3-logs\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.436536 4805 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.441039 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-config-data\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.443673 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-scripts\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.447827 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.472432 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-combined-ca-bundle\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.483054 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crgk5\" (UniqueName: \"kubernetes.io/projected/d50fc8fa-34b3-48cf-9e68-c474509271a3-kube-api-access-crgk5\") pod \"placement-db-sync-z62rr\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.538857 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-config\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: 
\"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.539443 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-scripts\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.539618 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.539751 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.539913 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-run-httpd\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.540068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r79f8\" (UniqueName: \"kubernetes.io/projected/104ec6b3-3a02-486e-8948-0aeb16bbddd8-kube-api-access-r79f8\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.540165 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-log-httpd\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.540266 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gvrl\" (UniqueName: \"kubernetes.io/projected/eea9ce90-7516-47bc-844e-224cf41929e4-kube-api-access-9gvrl\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.540394 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.540492 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.540589 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.540678 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.540879 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-config-data\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.539969 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-config\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.540556 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.545144 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.547201 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-log-httpd\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 
21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.547612 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-run-httpd\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.547676 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.548106 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.548483 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.550765 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-config-data\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.553020 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-scripts\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.555045 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.567372 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gvrl\" (UniqueName: \"kubernetes.io/projected/eea9ce90-7516-47bc-844e-224cf41929e4-kube-api-access-9gvrl\") pod \"dnsmasq-dns-56df8fb6b7-wb67p\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.567667 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r79f8\" (UniqueName: \"kubernetes.io/projected/104ec6b3-3a02-486e-8948-0aeb16bbddd8-kube-api-access-r79f8\") pod \"ceilometer-0\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.588896 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-z62rr" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.594603 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wwvck"] Feb 16 21:15:54 crc kubenswrapper[4805]: W0216 21:15:54.635666 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bcfb55b_6754_4f9f_a69c_744ebb90bbc8.slice/crio-d5c86cad164c92ec2e11a351525bd26746c7e2c42132ef407037c809dfab4756 WatchSource:0}: Error finding container d5c86cad164c92ec2e11a351525bd26746c7e2c42132ef407037c809dfab4756: Status 404 returned error can't find the container with id d5c86cad164c92ec2e11a351525bd26746c7e2c42132ef407037c809dfab4756 Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.658251 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.781762 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.783388 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.796960 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.800642 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.801442 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.801677 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hrrrc" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.802491 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.802621 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.911353 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qs466"] Feb 16 21:15:54 crc kubenswrapper[4805]: W0216 21:15:54.938940 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda21f2059_e2d3_46c7_bbf9_2a285e4d1662.slice/crio-9f323256b836c07b86be39e4764440d114b1b4fcdcb37373ee85bd33794dc2d8 WatchSource:0}: Error finding container 9f323256b836c07b86be39e4764440d114b1b4fcdcb37373ee85bd33794dc2d8: Status 404 returned error can't find the container with id 9f323256b836c07b86be39e4764440d114b1b4fcdcb37373ee85bd33794dc2d8 Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.941788 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qgsth"] Feb 16 21:15:54 crc kubenswrapper[4805]: W0216 21:15:54.944671 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1ec9fe_bc8c_47c8_a720_2e64cb0da40b.slice/crio-34f6e186da7eed2c50ce08151a27da7ce949b306e7a3053eaaf6661e92ac67d2 WatchSource:0}: Error finding container 34f6e186da7eed2c50ce08151a27da7ce949b306e7a3053eaaf6661e92ac67d2: Status 404 returned error can't find the container with id 34f6e186da7eed2c50ce08151a27da7ce949b306e7a3053eaaf6661e92ac67d2 Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.951904 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spdfk\" (UniqueName: \"kubernetes.io/projected/3584cba2-4c2c-4779-b7ff-80ec20f500cb-kube-api-access-spdfk\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.951965 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.951994 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.953288 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.953359 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.953408 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.953475 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.953634 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-logs\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.955918 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.958529 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.964801 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.964944 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 21:15:54 crc kubenswrapper[4805]: I0216 21:15:54.976089 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057399 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-logs\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057449 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057513 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057534 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4nxnk\" (UniqueName: \"kubernetes.io/projected/7ff87923-c71a-4c5e-9c05-6c06608e7e27-kube-api-access-4nxnk\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057549 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-logs\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057567 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spdfk\" (UniqueName: \"kubernetes.io/projected/3584cba2-4c2c-4779-b7ff-80ec20f500cb-kube-api-access-spdfk\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057587 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057612 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057642 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057665 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057691 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057744 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057769 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057796 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.057821 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.058320 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-logs\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.059192 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.073036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-scripts\") 
pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.074582 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.074608 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/13fbba481ac34178d672430e609e409da28aa7e56b577de46c4337378ecf394e/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.075477 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spdfk\" (UniqueName: \"kubernetes.io/projected/3584cba2-4c2c-4779-b7ff-80ec20f500cb-kube-api-access-spdfk\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.080597 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.080922 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.092458 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.159810 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.159884 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.159907 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nxnk\" (UniqueName: \"kubernetes.io/projected/7ff87923-c71a-4c5e-9c05-6c06608e7e27-kube-api-access-4nxnk\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.159929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-logs\") pod 
\"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.159954 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.159979 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.160035 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.160061 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.163646 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " 
pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.163762 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-logs\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.180591 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.180797 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.180835 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4cb8b45edeb659dc9877cd079ad06833b4a4e61f890a1a00cd5e71596d9e0ea/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.181681 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 
21:15:55.181765 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.182931 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.189773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nxnk\" (UniqueName: \"kubernetes.io/projected/7ff87923-c71a-4c5e-9c05-6c06608e7e27-kube-api-access-4nxnk\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.207589 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.231653 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9ms99"] Feb 16 21:15:55 crc kubenswrapper[4805]: W0216 21:15:55.240027 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab6c7759_7bcf_4efa_b50f_b73e87f20842.slice/crio-49dd8717b11d30e402d3967d7fb919fcbdb6837de27b30bc0d25f2ce23842765 WatchSource:0}: Error finding container 49dd8717b11d30e402d3967d7fb919fcbdb6837de27b30bc0d25f2ce23842765: Status 404 returned error can't find the container with id 49dd8717b11d30e402d3967d7fb919fcbdb6837de27b30bc0d25f2ce23842765 Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.241495 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-txbxn"] Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.282191 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" event={"ID":"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8","Type":"ContainerStarted","Data":"d5c86cad164c92ec2e11a351525bd26746c7e2c42132ef407037c809dfab4756"} Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.282902 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" podUID="0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" containerName="init" containerID="cri-o://9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e" gracePeriod=10 Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.284259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ms99" event={"ID":"c8125a07-0bfb-4381-80e2-bf5bb1525026","Type":"ContainerStarted","Data":"68d1ab1edf6967f6797f52fadbf4205ae99764541882d0167771c9a02976e1a0"} Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.285485 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-txbxn" event={"ID":"ab6c7759-7bcf-4efa-b50f-b73e87f20842","Type":"ContainerStarted","Data":"49dd8717b11d30e402d3967d7fb919fcbdb6837de27b30bc0d25f2ce23842765"} Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.294560 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-qgsth" event={"ID":"a21f2059-e2d3-46c7-bbf9-2a285e4d1662","Type":"ContainerStarted","Data":"9f323256b836c07b86be39e4764440d114b1b4fcdcb37373ee85bd33794dc2d8"} Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.297028 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qs466" event={"ID":"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b","Type":"ContainerStarted","Data":"34f6e186da7eed2c50ce08151a27da7ce949b306e7a3053eaaf6661e92ac67d2"} Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.302598 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.343695 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qgsth" podStartSLOduration=2.343671275 podStartE2EDuration="2.343671275s" podCreationTimestamp="2026-02-16 21:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:55.329713556 +0000 UTC m=+1173.148396851" watchObservedRunningTime="2026-02-16 21:15:55.343671275 +0000 UTC m=+1173.162354570" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.426170 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.446026 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.450559 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8kxrn"] Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.487555 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-z62rr"] Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.501339 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wb67p"] Feb 16 21:15:55 crc kubenswrapper[4805]: I0216 21:15:55.689159 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:55 crc kubenswrapper[4805]: W0216 21:15:55.754521 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod104ec6b3_3a02_486e_8948_0aeb16bbddd8.slice/crio-16f230cc4b46c4ed7af82302fd0cfe6ffcbaaed5e7483ad636ecbd747862ac28 WatchSource:0}: Error finding container 16f230cc4b46c4ed7af82302fd0cfe6ffcbaaed5e7483ad636ecbd747862ac28: Status 404 returned error can't find the container with id 16f230cc4b46c4ed7af82302fd0cfe6ffcbaaed5e7483ad636ecbd747862ac28 Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.047532 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.196284 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-nb\") pod \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.196340 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-svc\") pod \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.196381 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-sb\") pod \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.196563 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-config\") pod \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.196606 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-swift-storage-0\") pod \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.196626 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlc9w\" 
(UniqueName: \"kubernetes.io/projected/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-kube-api-access-vlc9w\") pod \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\" (UID: \"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8\") " Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.210956 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-kube-api-access-vlc9w" (OuterVolumeSpecName: "kube-api-access-vlc9w") pod "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" (UID: "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8"). InnerVolumeSpecName "kube-api-access-vlc9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.259726 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" (UID: "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.277242 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" (UID: "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.291919 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-config" (OuterVolumeSpecName: "config") pod "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" (UID: "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.294985 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" (UID: "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.299292 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" (UID: "0bcfb55b-6754-4f9f-a69c-744ebb90bbc8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.299882 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.299897 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.299909 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlc9w\" (UniqueName: \"kubernetes.io/projected/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-kube-api-access-vlc9w\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.299918 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" 
Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.299926 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.299933 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.320254 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qgsth" event={"ID":"a21f2059-e2d3-46c7-bbf9-2a285e4d1662","Type":"ContainerStarted","Data":"a5624c98f24b25798eff966ebf252cc87d1fd5df04a3d9250be3e0700b32bd41"} Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.330562 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.335250 4805 generic.go:334] "Generic (PLEG): container finished" podID="0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" containerID="9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e" exitCode=0 Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.335339 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" event={"ID":"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8","Type":"ContainerDied","Data":"9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e"} Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.335368 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" event={"ID":"0bcfb55b-6754-4f9f-a69c-744ebb90bbc8","Type":"ContainerDied","Data":"d5c86cad164c92ec2e11a351525bd26746c7e2c42132ef407037c809dfab4756"} Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.335385 4805 scope.go:117] "RemoveContainer" 
containerID="9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.335487 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-wwvck" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.341881 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"104ec6b3-3a02-486e-8948-0aeb16bbddd8","Type":"ContainerStarted","Data":"16f230cc4b46c4ed7af82302fd0cfe6ffcbaaed5e7483ad636ecbd747862ac28"} Feb 16 21:15:56 crc kubenswrapper[4805]: W0216 21:15:56.345169 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3584cba2_4c2c_4779_b7ff_80ec20f500cb.slice/crio-11fee0ef2f775a1766c59fd5342722e0b12759f6f0b1194c93a7eb0e0c71cac6 WatchSource:0}: Error finding container 11fee0ef2f775a1766c59fd5342722e0b12759f6f0b1194c93a7eb0e0c71cac6: Status 404 returned error can't find the container with id 11fee0ef2f775a1766c59fd5342722e0b12759f6f0b1194c93a7eb0e0c71cac6 Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.355251 4805 generic.go:334] "Generic (PLEG): container finished" podID="eea9ce90-7516-47bc-844e-224cf41929e4" containerID="9dd86db6be87deef5f71a186cce137063380fd472f167fd879141be139d1371f" exitCode=0 Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.355311 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" event={"ID":"eea9ce90-7516-47bc-844e-224cf41929e4","Type":"ContainerDied","Data":"9dd86db6be87deef5f71a186cce137063380fd472f167fd879141be139d1371f"} Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.355338 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" event={"ID":"eea9ce90-7516-47bc-844e-224cf41929e4","Type":"ContainerStarted","Data":"e5ea28d6a8170d34d961f205b19a5fc0130b66cc00338ad41ee3ffabf49ff5ed"} Feb 16 
21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.372797 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.386381 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z62rr" event={"ID":"d50fc8fa-34b3-48cf-9e68-c474509271a3","Type":"ContainerStarted","Data":"eb48b27cb779e0199cf17f37df18f2d34c4702e71bf3cc5892869bc36cd9631e"} Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.401469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8kxrn" event={"ID":"1050edad-f277-4299-ab1d-c812bc4ae573","Type":"ContainerStarted","Data":"41bab549e938c4c3e30c40fb4e65fdb531eb5ac78d3d654fbf2763c1e4b63392"} Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.401514 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8kxrn" event={"ID":"1050edad-f277-4299-ab1d-c812bc4ae573","Type":"ContainerStarted","Data":"8179dff601be917e1498c853ef705204e88d40b3c2e946b6f7343b951304abd1"} Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.435163 4805 scope.go:117] "RemoveContainer" containerID="9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e" Feb 16 21:15:56 crc kubenswrapper[4805]: E0216 21:15:56.438315 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e\": container with ID starting with 9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e not found: ID does not exist" containerID="9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.438358 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e"} err="failed to get container 
status \"9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e\": rpc error: code = NotFound desc = could not find container \"9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e\": container with ID starting with 9d9ef0f090ecffb29cd819a791a3034fc06c45f67b4591cf7a0afbd2dadc304e not found: ID does not exist" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.517233 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.565216 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wwvck"] Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.584068 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wwvck"] Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.592519 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-8kxrn" podStartSLOduration=2.59249997 podStartE2EDuration="2.59249997s" podCreationTimestamp="2026-02-16 21:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:56.454026034 +0000 UTC m=+1174.272709339" watchObservedRunningTime="2026-02-16 21:15:56.59249997 +0000 UTC m=+1174.411183265" Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.633098 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:15:56 crc kubenswrapper[4805]: I0216 21:15:56.788421 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:15:57 crc kubenswrapper[4805]: I0216 21:15:57.090833 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:15:57 crc kubenswrapper[4805]: I0216 21:15:57.414391 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" event={"ID":"eea9ce90-7516-47bc-844e-224cf41929e4","Type":"ContainerStarted","Data":"ca90aef24be827db8fb2ae475ed88f2ea679d9c43700d8e5fe123bdf5da1f7fe"} Feb 16 21:15:57 crc kubenswrapper[4805]: I0216 21:15:57.414670 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:15:57 crc kubenswrapper[4805]: I0216 21:15:57.435089 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" podStartSLOduration=3.434640738 podStartE2EDuration="3.434640738s" podCreationTimestamp="2026-02-16 21:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:57.431180101 +0000 UTC m=+1175.249863396" watchObservedRunningTime="2026-02-16 21:15:57.434640738 +0000 UTC m=+1175.253324033" Feb 16 21:15:57 crc kubenswrapper[4805]: I0216 21:15:57.436980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ff87923-c71a-4c5e-9c05-6c06608e7e27","Type":"ContainerStarted","Data":"95f238ce2a30422da75f7fe627756beab39bf5ac501eced4db1400b55b4342db"} Feb 16 21:15:57 crc kubenswrapper[4805]: I0216 21:15:57.448547 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3584cba2-4c2c-4779-b7ff-80ec20f500cb","Type":"ContainerStarted","Data":"11fee0ef2f775a1766c59fd5342722e0b12759f6f0b1194c93a7eb0e0c71cac6"} Feb 16 21:15:57 crc kubenswrapper[4805]: I0216 21:15:57.625469 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" path="/var/lib/kubelet/pods/0bcfb55b-6754-4f9f-a69c-744ebb90bbc8/volumes" Feb 16 21:15:58 crc kubenswrapper[4805]: I0216 21:15:58.525739 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"7ff87923-c71a-4c5e-9c05-6c06608e7e27","Type":"ContainerStarted","Data":"5587dcc56d5574a23616f83c1d7d96e31faf12b06481ee5c59f67582403a9bf7"} Feb 16 21:15:58 crc kubenswrapper[4805]: I0216 21:15:58.541850 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3584cba2-4c2c-4779-b7ff-80ec20f500cb","Type":"ContainerStarted","Data":"414885d3bfdd5fdbd02b0a63092f3109aa6d734a9272d1ba83b28dc7867b6cc3"} Feb 16 21:15:59 crc kubenswrapper[4805]: I0216 21:15:59.557936 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ff87923-c71a-4c5e-9c05-6c06608e7e27","Type":"ContainerStarted","Data":"a9ca9238e6eccae743f110691e55c5804a839bbecc530e4d088bec0672621933"} Feb 16 21:15:59 crc kubenswrapper[4805]: I0216 21:15:59.558265 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7ff87923-c71a-4c5e-9c05-6c06608e7e27" containerName="glance-log" containerID="cri-o://5587dcc56d5574a23616f83c1d7d96e31faf12b06481ee5c59f67582403a9bf7" gracePeriod=30 Feb 16 21:15:59 crc kubenswrapper[4805]: I0216 21:15:59.558287 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7ff87923-c71a-4c5e-9c05-6c06608e7e27" containerName="glance-httpd" containerID="cri-o://a9ca9238e6eccae743f110691e55c5804a839bbecc530e4d088bec0672621933" gracePeriod=30 Feb 16 21:15:59 crc kubenswrapper[4805]: I0216 21:15:59.568464 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3584cba2-4c2c-4779-b7ff-80ec20f500cb","Type":"ContainerStarted","Data":"77ec55563c3030b53e8f84c227e221be4f879f4c75dd91cb646236089f56ad49"} Feb 16 21:15:59 crc kubenswrapper[4805]: I0216 21:15:59.568599 4805 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="3584cba2-4c2c-4779-b7ff-80ec20f500cb" containerName="glance-log" containerID="cri-o://414885d3bfdd5fdbd02b0a63092f3109aa6d734a9272d1ba83b28dc7867b6cc3" gracePeriod=30 Feb 16 21:15:59 crc kubenswrapper[4805]: I0216 21:15:59.569469 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3584cba2-4c2c-4779-b7ff-80ec20f500cb" containerName="glance-httpd" containerID="cri-o://77ec55563c3030b53e8f84c227e221be4f879f4c75dd91cb646236089f56ad49" gracePeriod=30 Feb 16 21:15:59 crc kubenswrapper[4805]: I0216 21:15:59.586542 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.586522862 podStartE2EDuration="6.586522862s" podCreationTimestamp="2026-02-16 21:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:59.580112892 +0000 UTC m=+1177.398796187" watchObservedRunningTime="2026-02-16 21:15:59.586522862 +0000 UTC m=+1177.405206157" Feb 16 21:15:59 crc kubenswrapper[4805]: I0216 21:15:59.612237 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.6122184189999995 podStartE2EDuration="6.612218419s" podCreationTimestamp="2026-02-16 21:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:59.600974605 +0000 UTC m=+1177.419657910" watchObservedRunningTime="2026-02-16 21:15:59.612218419 +0000 UTC m=+1177.430901714" Feb 16 21:16:00 crc kubenswrapper[4805]: I0216 21:16:00.588643 4805 generic.go:334] "Generic (PLEG): container finished" podID="3584cba2-4c2c-4779-b7ff-80ec20f500cb" containerID="77ec55563c3030b53e8f84c227e221be4f879f4c75dd91cb646236089f56ad49" exitCode=0 Feb 16 
21:16:00 crc kubenswrapper[4805]: I0216 21:16:00.589004 4805 generic.go:334] "Generic (PLEG): container finished" podID="3584cba2-4c2c-4779-b7ff-80ec20f500cb" containerID="414885d3bfdd5fdbd02b0a63092f3109aa6d734a9272d1ba83b28dc7867b6cc3" exitCode=143 Feb 16 21:16:00 crc kubenswrapper[4805]: I0216 21:16:00.588740 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3584cba2-4c2c-4779-b7ff-80ec20f500cb","Type":"ContainerDied","Data":"77ec55563c3030b53e8f84c227e221be4f879f4c75dd91cb646236089f56ad49"} Feb 16 21:16:00 crc kubenswrapper[4805]: I0216 21:16:00.589077 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3584cba2-4c2c-4779-b7ff-80ec20f500cb","Type":"ContainerDied","Data":"414885d3bfdd5fdbd02b0a63092f3109aa6d734a9272d1ba83b28dc7867b6cc3"} Feb 16 21:16:00 crc kubenswrapper[4805]: I0216 21:16:00.592398 4805 generic.go:334] "Generic (PLEG): container finished" podID="a21f2059-e2d3-46c7-bbf9-2a285e4d1662" containerID="a5624c98f24b25798eff966ebf252cc87d1fd5df04a3d9250be3e0700b32bd41" exitCode=0 Feb 16 21:16:00 crc kubenswrapper[4805]: I0216 21:16:00.592444 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qgsth" event={"ID":"a21f2059-e2d3-46c7-bbf9-2a285e4d1662","Type":"ContainerDied","Data":"a5624c98f24b25798eff966ebf252cc87d1fd5df04a3d9250be3e0700b32bd41"} Feb 16 21:16:00 crc kubenswrapper[4805]: I0216 21:16:00.597691 4805 generic.go:334] "Generic (PLEG): container finished" podID="7ff87923-c71a-4c5e-9c05-6c06608e7e27" containerID="a9ca9238e6eccae743f110691e55c5804a839bbecc530e4d088bec0672621933" exitCode=0 Feb 16 21:16:00 crc kubenswrapper[4805]: I0216 21:16:00.597746 4805 generic.go:334] "Generic (PLEG): container finished" podID="7ff87923-c71a-4c5e-9c05-6c06608e7e27" containerID="5587dcc56d5574a23616f83c1d7d96e31faf12b06481ee5c59f67582403a9bf7" exitCode=143 Feb 16 21:16:00 crc kubenswrapper[4805]: 
I0216 21:16:00.597776 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ff87923-c71a-4c5e-9c05-6c06608e7e27","Type":"ContainerDied","Data":"a9ca9238e6eccae743f110691e55c5804a839bbecc530e4d088bec0672621933"} Feb 16 21:16:00 crc kubenswrapper[4805]: I0216 21:16:00.597801 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ff87923-c71a-4c5e-9c05-6c06608e7e27","Type":"ContainerDied","Data":"5587dcc56d5574a23616f83c1d7d96e31faf12b06481ee5c59f67582403a9bf7"} Feb 16 21:16:04 crc kubenswrapper[4805]: I0216 21:16:04.659962 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:16:04 crc kubenswrapper[4805]: I0216 21:16:04.756841 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-87s5g"] Feb 16 21:16:04 crc kubenswrapper[4805]: I0216 21:16:04.757711 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" podUID="b3c33787-c8a9-46fa-b932-cedfe284d377" containerName="dnsmasq-dns" containerID="cri-o://5f89293ae1efb0c2dd0ceae3c7f8e96b41213720ec9e330d4f11df65222d7ac7" gracePeriod=10 Feb 16 21:16:05 crc kubenswrapper[4805]: I0216 21:16:05.669147 4805 generic.go:334] "Generic (PLEG): container finished" podID="b3c33787-c8a9-46fa-b932-cedfe284d377" containerID="5f89293ae1efb0c2dd0ceae3c7f8e96b41213720ec9e330d4f11df65222d7ac7" exitCode=0 Feb 16 21:16:05 crc kubenswrapper[4805]: I0216 21:16:05.669226 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" event={"ID":"b3c33787-c8a9-46fa-b932-cedfe284d377","Type":"ContainerDied","Data":"5f89293ae1efb0c2dd0ceae3c7f8e96b41213720ec9e330d4f11df65222d7ac7"} Feb 16 21:16:06 crc kubenswrapper[4805]: I0216 21:16:06.370860 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/prometheus-metric-storage-0" Feb 16 21:16:06 crc kubenswrapper[4805]: I0216 21:16:06.379885 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 21:16:06 crc kubenswrapper[4805]: I0216 21:16:06.684342 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 21:16:08 crc kubenswrapper[4805]: I0216 21:16:08.577754 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" podUID="b3c33787-c8a9-46fa-b932-cedfe284d377" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.466700 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.531809 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjlml\" (UniqueName: \"kubernetes.io/projected/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-kube-api-access-wjlml\") pod \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.531873 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-credential-keys\") pod \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.531955 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-combined-ca-bundle\") pod \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " 
Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.531980 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-config-data\") pod \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.532014 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-scripts\") pod \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.532074 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-fernet-keys\") pod \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\" (UID: \"a21f2059-e2d3-46c7-bbf9-2a285e4d1662\") " Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.539845 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-kube-api-access-wjlml" (OuterVolumeSpecName: "kube-api-access-wjlml") pod "a21f2059-e2d3-46c7-bbf9-2a285e4d1662" (UID: "a21f2059-e2d3-46c7-bbf9-2a285e4d1662"). InnerVolumeSpecName "kube-api-access-wjlml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.543645 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-scripts" (OuterVolumeSpecName: "scripts") pod "a21f2059-e2d3-46c7-bbf9-2a285e4d1662" (UID: "a21f2059-e2d3-46c7-bbf9-2a285e4d1662"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.547347 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a21f2059-e2d3-46c7-bbf9-2a285e4d1662" (UID: "a21f2059-e2d3-46c7-bbf9-2a285e4d1662"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.577945 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a21f2059-e2d3-46c7-bbf9-2a285e4d1662" (UID: "a21f2059-e2d3-46c7-bbf9-2a285e4d1662"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.591489 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a21f2059-e2d3-46c7-bbf9-2a285e4d1662" (UID: "a21f2059-e2d3-46c7-bbf9-2a285e4d1662"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.619857 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-config-data" (OuterVolumeSpecName: "config-data") pod "a21f2059-e2d3-46c7-bbf9-2a285e4d1662" (UID: "a21f2059-e2d3-46c7-bbf9-2a285e4d1662"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.639186 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjlml\" (UniqueName: \"kubernetes.io/projected/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-kube-api-access-wjlml\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.639226 4805 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.639238 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.639251 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.639267 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.639278 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a21f2059-e2d3-46c7-bbf9-2a285e4d1662-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.745978 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qgsth" event={"ID":"a21f2059-e2d3-46c7-bbf9-2a285e4d1662","Type":"ContainerDied","Data":"9f323256b836c07b86be39e4764440d114b1b4fcdcb37373ee85bd33794dc2d8"} Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 
21:16:11.746018 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f323256b836c07b86be39e4764440d114b1b4fcdcb37373ee85bd33794dc2d8" Feb 16 21:16:11 crc kubenswrapper[4805]: I0216 21:16:11.746443 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qgsth" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.553434 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qgsth"] Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.566230 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qgsth"] Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.666650 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2255v"] Feb 16 21:16:12 crc kubenswrapper[4805]: E0216 21:16:12.667307 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a21f2059-e2d3-46c7-bbf9-2a285e4d1662" containerName="keystone-bootstrap" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.667325 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a21f2059-e2d3-46c7-bbf9-2a285e4d1662" containerName="keystone-bootstrap" Feb 16 21:16:12 crc kubenswrapper[4805]: E0216 21:16:12.667336 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" containerName="init" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.667343 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" containerName="init" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.667540 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a21f2059-e2d3-46c7-bbf9-2a285e4d1662" containerName="keystone-bootstrap" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.667555 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bcfb55b-6754-4f9f-a69c-744ebb90bbc8" 
containerName="init" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.669116 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.674771 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xc6z5" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.675007 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.675601 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.675772 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.679340 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2255v"] Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.681038 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.865522 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-scripts\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.865662 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-credential-keys\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: 
I0216 21:16:12.865752 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-combined-ca-bundle\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.865811 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-config-data\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.865833 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-fernet-keys\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.865865 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8l42\" (UniqueName: \"kubernetes.io/projected/0d3ea232-36aa-48a2-b2d4-449767fd61fb-kube-api-access-l8l42\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.968654 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-scripts\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.968791 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-credential-keys\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.968872 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-combined-ca-bundle\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.968929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-config-data\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.969005 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-fernet-keys\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.969535 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8l42\" (UniqueName: \"kubernetes.io/projected/0d3ea232-36aa-48a2-b2d4-449767fd61fb-kube-api-access-l8l42\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.982000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-scripts\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.982295 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-config-data\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.982439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-fernet-keys\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.982764 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-combined-ca-bundle\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.982928 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-credential-keys\") pod \"keystone-bootstrap-2255v\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:12 crc kubenswrapper[4805]: I0216 21:16:12.985143 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8l42\" (UniqueName: \"kubernetes.io/projected/0d3ea232-36aa-48a2-b2d4-449767fd61fb-kube-api-access-l8l42\") pod \"keystone-bootstrap-2255v\" (UID: 
\"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:13 crc kubenswrapper[4805]: I0216 21:16:13.042374 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:13 crc kubenswrapper[4805]: I0216 21:16:13.613322 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a21f2059-e2d3-46c7-bbf9-2a285e4d1662" path="/var/lib/kubelet/pods/a21f2059-e2d3-46c7-bbf9-2a285e4d1662/volumes" Feb 16 21:16:16 crc kubenswrapper[4805]: I0216 21:16:16.819157 4805 generic.go:334] "Generic (PLEG): container finished" podID="1050edad-f277-4299-ab1d-c812bc4ae573" containerID="41bab549e938c4c3e30c40fb4e65fdb531eb5ac78d3d654fbf2763c1e4b63392" exitCode=0 Feb 16 21:16:16 crc kubenswrapper[4805]: I0216 21:16:16.819270 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8kxrn" event={"ID":"1050edad-f277-4299-ab1d-c812bc4ae573","Type":"ContainerDied","Data":"41bab549e938c4c3e30c40fb4e65fdb531eb5ac78d3d654fbf2763c1e4b63392"} Feb 16 21:16:18 crc kubenswrapper[4805]: I0216 21:16:18.580045 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" podUID="b3c33787-c8a9-46fa-b932-cedfe284d377" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Feb 16 21:16:21 crc kubenswrapper[4805]: E0216 21:16:21.269613 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 16 21:16:21 crc kubenswrapper[4805]: E0216 21:16:21.270246 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdkf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-txbxn_openstack(ab6c7759-7bcf-4efa-b50f-b73e87f20842): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:16:21 crc kubenswrapper[4805]: E0216 21:16:21.271529 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-txbxn" 
podUID="ab6c7759-7bcf-4efa-b50f-b73e87f20842" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.362862 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.490957 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-httpd-run\") pod \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.491084 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-logs\") pod \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.491198 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-scripts\") pod \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.491277 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-config-data\") pod \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.491312 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-public-tls-certs\") pod \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 
21:16:21.491348 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nxnk\" (UniqueName: \"kubernetes.io/projected/7ff87923-c71a-4c5e-9c05-6c06608e7e27-kube-api-access-4nxnk\") pod \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.491519 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.491659 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-combined-ca-bundle\") pod \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\" (UID: \"7ff87923-c71a-4c5e-9c05-6c06608e7e27\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.491845 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7ff87923-c71a-4c5e-9c05-6c06608e7e27" (UID: "7ff87923-c71a-4c5e-9c05-6c06608e7e27"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.491886 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-logs" (OuterVolumeSpecName: "logs") pod "7ff87923-c71a-4c5e-9c05-6c06608e7e27" (UID: "7ff87923-c71a-4c5e-9c05-6c06608e7e27"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.492543 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.492586 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff87923-c71a-4c5e-9c05-6c06608e7e27-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.498543 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-scripts" (OuterVolumeSpecName: "scripts") pod "7ff87923-c71a-4c5e-9c05-6c06608e7e27" (UID: "7ff87923-c71a-4c5e-9c05-6c06608e7e27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.499275 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff87923-c71a-4c5e-9c05-6c06608e7e27-kube-api-access-4nxnk" (OuterVolumeSpecName: "kube-api-access-4nxnk") pod "7ff87923-c71a-4c5e-9c05-6c06608e7e27" (UID: "7ff87923-c71a-4c5e-9c05-6c06608e7e27"). InnerVolumeSpecName "kube-api-access-4nxnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.514801 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa" (OuterVolumeSpecName: "glance") pod "7ff87923-c71a-4c5e-9c05-6c06608e7e27" (UID: "7ff87923-c71a-4c5e-9c05-6c06608e7e27"). InnerVolumeSpecName "pvc-f93d250e-b474-4652-90b3-558818d0e8aa". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.563908 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-config-data" (OuterVolumeSpecName: "config-data") pod "7ff87923-c71a-4c5e-9c05-6c06608e7e27" (UID: "7ff87923-c71a-4c5e-9c05-6c06608e7e27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.565794 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ff87923-c71a-4c5e-9c05-6c06608e7e27" (UID: "7ff87923-c71a-4c5e-9c05-6c06608e7e27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.572927 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7ff87923-c71a-4c5e-9c05-6c06608e7e27" (UID: "7ff87923-c71a-4c5e-9c05-6c06608e7e27"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.597001 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.597031 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.597042 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nxnk\" (UniqueName: \"kubernetes.io/projected/7ff87923-c71a-4c5e-9c05-6c06608e7e27-kube-api-access-4nxnk\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.597066 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") on node \"crc\" " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.597077 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.597086 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff87923-c71a-4c5e-9c05-6c06608e7e27-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.627132 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.627266 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f93d250e-b474-4652-90b3-558818d0e8aa" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa") on node "crc" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.700237 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.802292 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" Feb 16 21:16:21 crc kubenswrapper[4805]: E0216 21:16:21.809340 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 16 21:16:21 crc kubenswrapper[4805]: E0216 21:16:21.809472 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8qxl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qs466_openstack(fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 
16 21:16:21 crc kubenswrapper[4805]: E0216 21:16:21.810633 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-qs466" podUID="fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.814761 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.874617 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.903289 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ztlv\" (UniqueName: \"kubernetes.io/projected/b3c33787-c8a9-46fa-b932-cedfe284d377-kube-api-access-6ztlv\") pod \"b3c33787-c8a9-46fa-b932-cedfe284d377\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.903368 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-combined-ca-bundle\") pod \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.903389 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spdfk\" (UniqueName: \"kubernetes.io/projected/3584cba2-4c2c-4779-b7ff-80ec20f500cb-kube-api-access-spdfk\") pod \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.903449 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-svc\") pod \"b3c33787-c8a9-46fa-b932-cedfe284d377\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.903669 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.903744 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-httpd-run\") pod \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.903858 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-internal-tls-certs\") pod \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.903919 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-sb\") pod \"b3c33787-c8a9-46fa-b932-cedfe284d377\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.903950 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-nb\") pod \"b3c33787-c8a9-46fa-b932-cedfe284d377\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 
21:16:21.903970 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-swift-storage-0\") pod \"b3c33787-c8a9-46fa-b932-cedfe284d377\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.904007 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-logs\") pod \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.904047 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-config-data\") pod \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.904079 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-config\") pod \"b3c33787-c8a9-46fa-b932-cedfe284d377\" (UID: \"b3c33787-c8a9-46fa-b932-cedfe284d377\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.904109 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-scripts\") pod \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\" (UID: \"3584cba2-4c2c-4779-b7ff-80ec20f500cb\") " Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.908621 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-scripts" (OuterVolumeSpecName: "scripts") pod "3584cba2-4c2c-4779-b7ff-80ec20f500cb" (UID: 
"3584cba2-4c2c-4779-b7ff-80ec20f500cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.908872 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-logs" (OuterVolumeSpecName: "logs") pod "3584cba2-4c2c-4779-b7ff-80ec20f500cb" (UID: "3584cba2-4c2c-4779-b7ff-80ec20f500cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.909897 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c33787-c8a9-46fa-b932-cedfe284d377-kube-api-access-6ztlv" (OuterVolumeSpecName: "kube-api-access-6ztlv") pod "b3c33787-c8a9-46fa-b932-cedfe284d377" (UID: "b3c33787-c8a9-46fa-b932-cedfe284d377"). InnerVolumeSpecName "kube-api-access-6ztlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.916418 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3584cba2-4c2c-4779-b7ff-80ec20f500cb" (UID: "3584cba2-4c2c-4779-b7ff-80ec20f500cb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.935026 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3584cba2-4c2c-4779-b7ff-80ec20f500cb-kube-api-access-spdfk" (OuterVolumeSpecName: "kube-api-access-spdfk") pod "3584cba2-4c2c-4779-b7ff-80ec20f500cb" (UID: "3584cba2-4c2c-4779-b7ff-80ec20f500cb"). InnerVolumeSpecName "kube-api-access-spdfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.943395 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12" (OuterVolumeSpecName: "glance") pod "3584cba2-4c2c-4779-b7ff-80ec20f500cb" (UID: "3584cba2-4c2c-4779-b7ff-80ec20f500cb"). InnerVolumeSpecName "pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.945200 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8kxrn" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.945387 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8kxrn" event={"ID":"1050edad-f277-4299-ab1d-c812bc4ae573","Type":"ContainerDied","Data":"8179dff601be917e1498c853ef705204e88d40b3c2e946b6f7343b951304abd1"} Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.945457 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8179dff601be917e1498c853ef705204e88d40b3c2e946b6f7343b951304abd1" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.949285 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" event={"ID":"b3c33787-c8a9-46fa-b932-cedfe284d377","Type":"ContainerDied","Data":"90188d774ba2f596a6f8365f7e4c2804532fd216d144f7983a9f0cfe848706f2"} Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.949334 4805 scope.go:117] "RemoveContainer" containerID="5f89293ae1efb0c2dd0ceae3c7f8e96b41213720ec9e330d4f11df65222d7ac7" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.949478 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.955445 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ff87923-c71a-4c5e-9c05-6c06608e7e27","Type":"ContainerDied","Data":"95f238ce2a30422da75f7fe627756beab39bf5ac501eced4db1400b55b4342db"} Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.955552 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.961176 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.961239 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3584cba2-4c2c-4779-b7ff-80ec20f500cb","Type":"ContainerDied","Data":"11fee0ef2f775a1766c59fd5342722e0b12759f6f0b1194c93a7eb0e0c71cac6"} Feb 16 21:16:21 crc kubenswrapper[4805]: E0216 21:16:21.962933 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-txbxn" podUID="ab6c7759-7bcf-4efa-b50f-b73e87f20842" Feb 16 21:16:21 crc kubenswrapper[4805]: E0216 21:16:21.963106 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-qs466" podUID="fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.964813 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3584cba2-4c2c-4779-b7ff-80ec20f500cb" (UID: "3584cba2-4c2c-4779-b7ff-80ec20f500cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:21 crc kubenswrapper[4805]: I0216 21:16:21.990275 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b3c33787-c8a9-46fa-b932-cedfe284d377" (UID: "b3c33787-c8a9-46fa-b932-cedfe284d377"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.006949 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fppn6\" (UniqueName: \"kubernetes.io/projected/1050edad-f277-4299-ab1d-c812bc4ae573-kube-api-access-fppn6\") pod \"1050edad-f277-4299-ab1d-c812bc4ae573\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.008299 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-combined-ca-bundle\") pod \"1050edad-f277-4299-ab1d-c812bc4ae573\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.008335 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-config\") pod \"1050edad-f277-4299-ab1d-c812bc4ae573\" (UID: \"1050edad-f277-4299-ab1d-c812bc4ae573\") " Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.009114 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.009133 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.009144 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.009153 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ztlv\" (UniqueName: \"kubernetes.io/projected/b3c33787-c8a9-46fa-b932-cedfe284d377-kube-api-access-6ztlv\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.009164 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.009176 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spdfk\" (UniqueName: \"kubernetes.io/projected/3584cba2-4c2c-4779-b7ff-80ec20f500cb-kube-api-access-spdfk\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.009199 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") on node \"crc\" " Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.009210 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3584cba2-4c2c-4779-b7ff-80ec20f500cb-httpd-run\") 
on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.015089 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1050edad-f277-4299-ab1d-c812bc4ae573-kube-api-access-fppn6" (OuterVolumeSpecName: "kube-api-access-fppn6") pod "1050edad-f277-4299-ab1d-c812bc4ae573" (UID: "1050edad-f277-4299-ab1d-c812bc4ae573"). InnerVolumeSpecName "kube-api-access-fppn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.015089 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3584cba2-4c2c-4779-b7ff-80ec20f500cb" (UID: "3584cba2-4c2c-4779-b7ff-80ec20f500cb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.016193 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-config-data" (OuterVolumeSpecName: "config-data") pod "3584cba2-4c2c-4779-b7ff-80ec20f500cb" (UID: "3584cba2-4c2c-4779-b7ff-80ec20f500cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.024352 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b3c33787-c8a9-46fa-b932-cedfe284d377" (UID: "b3c33787-c8a9-46fa-b932-cedfe284d377"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.027849 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.054047 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b3c33787-c8a9-46fa-b932-cedfe284d377" (UID: "b3c33787-c8a9-46fa-b932-cedfe284d377"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.055297 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.055887 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1050edad-f277-4299-ab1d-c812bc4ae573" (UID: "1050edad-f277-4299-ab1d-c812bc4ae573"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.060511 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-config" (OuterVolumeSpecName: "config") pod "b3c33787-c8a9-46fa-b932-cedfe284d377" (UID: "b3c33787-c8a9-46fa-b932-cedfe284d377"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.072360 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b3c33787-c8a9-46fa-b932-cedfe284d377" (UID: "b3c33787-c8a9-46fa-b932-cedfe284d377"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.074578 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-config" (OuterVolumeSpecName: "config") pod "1050edad-f277-4299-ab1d-c812bc4ae573" (UID: "1050edad-f277-4299-ab1d-c812bc4ae573"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.079764 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:16:22 crc kubenswrapper[4805]: E0216 21:16:22.080285 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1050edad-f277-4299-ab1d-c812bc4ae573" containerName="neutron-db-sync" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.080380 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1050edad-f277-4299-ab1d-c812bc4ae573" containerName="neutron-db-sync" Feb 16 21:16:22 crc kubenswrapper[4805]: E0216 21:16:22.080449 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c33787-c8a9-46fa-b932-cedfe284d377" containerName="init" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.091550 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c33787-c8a9-46fa-b932-cedfe284d377" containerName="init" Feb 16 21:16:22 crc kubenswrapper[4805]: E0216 21:16:22.091647 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff87923-c71a-4c5e-9c05-6c06608e7e27" 
containerName="glance-httpd" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.091697 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff87923-c71a-4c5e-9c05-6c06608e7e27" containerName="glance-httpd" Feb 16 21:16:22 crc kubenswrapper[4805]: E0216 21:16:22.091815 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3584cba2-4c2c-4779-b7ff-80ec20f500cb" containerName="glance-log" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.091873 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3584cba2-4c2c-4779-b7ff-80ec20f500cb" containerName="glance-log" Feb 16 21:16:22 crc kubenswrapper[4805]: E0216 21:16:22.091948 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3584cba2-4c2c-4779-b7ff-80ec20f500cb" containerName="glance-httpd" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.091998 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3584cba2-4c2c-4779-b7ff-80ec20f500cb" containerName="glance-httpd" Feb 16 21:16:22 crc kubenswrapper[4805]: E0216 21:16:22.092052 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c33787-c8a9-46fa-b932-cedfe284d377" containerName="dnsmasq-dns" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.092104 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c33787-c8a9-46fa-b932-cedfe284d377" containerName="dnsmasq-dns" Feb 16 21:16:22 crc kubenswrapper[4805]: E0216 21:16:22.092158 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff87923-c71a-4c5e-9c05-6c06608e7e27" containerName="glance-log" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.092206 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff87923-c71a-4c5e-9c05-6c06608e7e27" containerName="glance-log" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.092604 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3584cba2-4c2c-4779-b7ff-80ec20f500cb" containerName="glance-httpd" Feb 16 21:16:22 crc 
kubenswrapper[4805]: I0216 21:16:22.092671 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1050edad-f277-4299-ab1d-c812bc4ae573" containerName="neutron-db-sync" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.081061 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.092819 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c33787-c8a9-46fa-b932-cedfe284d377" containerName="dnsmasq-dns" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.092880 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff87923-c71a-4c5e-9c05-6c06608e7e27" containerName="glance-log" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.092933 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3584cba2-4c2c-4779-b7ff-80ec20f500cb" containerName="glance-log" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.092988 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff87923-c71a-4c5e-9c05-6c06608e7e27" containerName="glance-httpd" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.093070 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12") on node "crc" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.094217 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.096431 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.096601 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.106264 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.111363 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.111447 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.111467 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.111513 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.111526 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc 
kubenswrapper[4805]: I0216 21:16:22.111537 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1050edad-f277-4299-ab1d-c812bc4ae573-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.111548 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.111560 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3584cba2-4c2c-4779-b7ff-80ec20f500cb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.111570 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3c33787-c8a9-46fa-b932-cedfe284d377-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.111580 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fppn6\" (UniqueName: \"kubernetes.io/projected/1050edad-f277-4299-ab1d-c812bc4ae573-kube-api-access-fppn6\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.213737 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.213840 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.213900 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-logs\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.213955 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnmr9\" (UniqueName: \"kubernetes.io/projected/7352be72-3bf9-4377-a713-ab6058b6785f-kube-api-access-gnmr9\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.214003 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-config-data\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.214023 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.214052 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-scripts\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.214244 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.316746 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.317167 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.317214 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-logs\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.317272 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnmr9\" (UniqueName: 
\"kubernetes.io/projected/7352be72-3bf9-4377-a713-ab6058b6785f-kube-api-access-gnmr9\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.317324 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-config-data\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.317346 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.317375 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-scripts\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.317424 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.317782 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.317781 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-logs\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.321859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-scripts\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.322044 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.322162 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.322195 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4cb8b45edeb659dc9877cd079ad06833b4a4e61f890a1a00cd5e71596d9e0ea/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.323753 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.327677 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-config-data\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.341462 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnmr9\" (UniqueName: \"kubernetes.io/projected/7352be72-3bf9-4377-a713-ab6058b6785f-kube-api-access-gnmr9\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.368429 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.413087 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.421276 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-87s5g"] Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.438146 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-87s5g"] Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.453060 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.469062 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.480773 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.482599 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.484566 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.485102 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.489615 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.627848 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.627906 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.627931 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.628008 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.628043 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zckgc\" (UniqueName: \"kubernetes.io/projected/33763587-13b0-4c1c-af15-3164866a25aa-kube-api-access-zckgc\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.628058 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.628075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.628131 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-logs\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.729679 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-logs\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.729860 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.729893 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.729916 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.729940 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.729982 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zckgc\" (UniqueName: \"kubernetes.io/projected/33763587-13b0-4c1c-af15-3164866a25aa-kube-api-access-zckgc\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.730001 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.730020 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.730381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-logs\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.731212 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.733463 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.733504 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/13fbba481ac34178d672430e609e409da28aa7e56b577de46c4337378ecf394e/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.736333 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.737454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.738571 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.745506 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.745686 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zckgc\" (UniqueName: \"kubernetes.io/projected/33763587-13b0-4c1c-af15-3164866a25aa-kube-api-access-zckgc\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.784951 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:16:22 crc kubenswrapper[4805]: I0216 21:16:22.806284 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.211236 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rmrmq"] Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.215196 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.234189 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rmrmq"] Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.342081 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.342169 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-svc\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.342259 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nsxg\" (UniqueName: \"kubernetes.io/projected/6b70cf49-b5fd-4814-87ef-e22b1b820066-kube-api-access-9nsxg\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.342289 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.342324 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.342343 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-config\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.348545 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8d5dc9954-x56z5"] Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.353059 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.355642 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-s66bn" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.356058 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.356078 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.356224 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.382622 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8d5dc9954-x56z5"] Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.447087 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h79x8\" (UniqueName: \"kubernetes.io/projected/37e4f0f1-8158-409b-95a0-12826bddebc2-kube-api-access-h79x8\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.447155 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.447175 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-config\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.447220 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-svc\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.447286 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nsxg\" (UniqueName: \"kubernetes.io/projected/6b70cf49-b5fd-4814-87ef-e22b1b820066-kube-api-access-9nsxg\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.447306 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-ovndb-tls-certs\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.447325 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.447339 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-httpd-config\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.447358 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-combined-ca-bundle\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.447386 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.447405 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-config\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.448322 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-config\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.449189 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.449450 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-svc\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.449460 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.449696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.468507 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nsxg\" (UniqueName: \"kubernetes.io/projected/6b70cf49-b5fd-4814-87ef-e22b1b820066-kube-api-access-9nsxg\") pod \"dnsmasq-dns-6b7b667979-rmrmq\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.536805 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.549129 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-ovndb-tls-certs\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.549173 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-httpd-config\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.549196 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-combined-ca-bundle\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.549280 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h79x8\" (UniqueName: \"kubernetes.io/projected/37e4f0f1-8158-409b-95a0-12826bddebc2-kube-api-access-h79x8\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.549312 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-config\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.551478 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.551565 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.551565 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.554670 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-combined-ca-bundle\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.565754 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-ovndb-tls-certs\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.565955 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-httpd-config\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.568216 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-config\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.580959 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-87s5g" podUID="b3c33787-c8a9-46fa-b932-cedfe284d377" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.581603 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h79x8\" (UniqueName: \"kubernetes.io/projected/37e4f0f1-8158-409b-95a0-12826bddebc2-kube-api-access-h79x8\") pod \"neutron-8d5dc9954-x56z5\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.629469 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3584cba2-4c2c-4779-b7ff-80ec20f500cb" path="/var/lib/kubelet/pods/3584cba2-4c2c-4779-b7ff-80ec20f500cb/volumes" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.630316 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff87923-c71a-4c5e-9c05-6c06608e7e27" path="/var/lib/kubelet/pods/7ff87923-c71a-4c5e-9c05-6c06608e7e27/volumes" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.633822 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b3c33787-c8a9-46fa-b932-cedfe284d377" path="/var/lib/kubelet/pods/b3c33787-c8a9-46fa-b932-cedfe284d377/volumes" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.685567 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-s66bn" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.694576 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:23 crc kubenswrapper[4805]: E0216 21:16:23.870615 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 21:16:23 crc kubenswrapper[4805]: E0216 21:16:23.870791 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zxq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9ms99_openstack(c8125a07-0bfb-4381-80e2-bf5bb1525026): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:16:23 crc kubenswrapper[4805]: I0216 21:16:23.871557 4805 scope.go:117] "RemoveContainer" containerID="f1709a892c6be6cb72c430684a69a3091687e107139b4f49a46c3fb4744ca95e" Feb 16 21:16:23 crc kubenswrapper[4805]: E0216 21:16:23.871942 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-9ms99" podUID="c8125a07-0bfb-4381-80e2-bf5bb1525026" Feb 16 21:16:23 crc kubenswrapper[4805]: E0216 21:16:23.997111 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-9ms99" podUID="c8125a07-0bfb-4381-80e2-bf5bb1525026" Feb 16 21:16:24 crc kubenswrapper[4805]: I0216 21:16:24.025116 4805 scope.go:117] "RemoveContainer" containerID="a9ca9238e6eccae743f110691e55c5804a839bbecc530e4d088bec0672621933" Feb 16 21:16:24 crc kubenswrapper[4805]: I0216 21:16:24.149322 4805 scope.go:117] "RemoveContainer" containerID="5587dcc56d5574a23616f83c1d7d96e31faf12b06481ee5c59f67582403a9bf7" Feb 16 21:16:24 crc kubenswrapper[4805]: I0216 21:16:24.232711 4805 scope.go:117] "RemoveContainer" containerID="77ec55563c3030b53e8f84c227e221be4f879f4c75dd91cb646236089f56ad49" Feb 16 21:16:24 crc kubenswrapper[4805]: I0216 21:16:24.285859 4805 scope.go:117] "RemoveContainer" 
containerID="414885d3bfdd5fdbd02b0a63092f3109aa6d734a9272d1ba83b28dc7867b6cc3" Feb 16 21:16:24 crc kubenswrapper[4805]: I0216 21:16:24.343172 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2255v"] Feb 16 21:16:24 crc kubenswrapper[4805]: I0216 21:16:24.383441 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 21:16:24 crc kubenswrapper[4805]: I0216 21:16:24.663427 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rmrmq"] Feb 16 21:16:24 crc kubenswrapper[4805]: I0216 21:16:24.748269 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8d5dc9954-x56z5"] Feb 16 21:16:24 crc kubenswrapper[4805]: I0216 21:16:24.831664 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.029148 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.030098 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" event={"ID":"6b70cf49-b5fd-4814-87ef-e22b1b820066","Type":"ContainerStarted","Data":"88564d5a9da1fe901b4cd43a529064a813fe7dd25cd0f577a5c3eb4fd09887f2"} Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.030143 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" event={"ID":"6b70cf49-b5fd-4814-87ef-e22b1b820066","Type":"ContainerStarted","Data":"6ad5bc3bf575b5b2e331577fe197dda1ae6ecb3e20d65e5fe9da49137118b05d"} Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.034678 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7352be72-3bf9-4377-a713-ab6058b6785f","Type":"ContainerStarted","Data":"521d6f18f7391ae47914d8f4a7e39939a6af53348906124f106f5434a65b2153"} Feb 16 
21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.036842 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z62rr" event={"ID":"d50fc8fa-34b3-48cf-9e68-c474509271a3","Type":"ContainerStarted","Data":"edbc542b53d489b3887a95d9fd439f2cee6f62e9b4f31defc68635d2df88f123"} Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.049991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2255v" event={"ID":"0d3ea232-36aa-48a2-b2d4-449767fd61fb","Type":"ContainerStarted","Data":"d86b238e2ac4de09a850d02bf51b519f4562c068a1b77efdb37d56c2b91568be"} Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.050038 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2255v" event={"ID":"0d3ea232-36aa-48a2-b2d4-449767fd61fb","Type":"ContainerStarted","Data":"5cca530433c7bb7f72d07c27a5f294343d9a2515676a64e522c812e37f9bca6a"} Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.051635 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"104ec6b3-3a02-486e-8948-0aeb16bbddd8","Type":"ContainerStarted","Data":"2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0"} Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.055284 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d5dc9954-x56z5" event={"ID":"37e4f0f1-8158-409b-95a0-12826bddebc2","Type":"ContainerStarted","Data":"fbc1cb2a207a7c02dfd363eb47219dea9ac28f33a3bba0ccbe0b055f8805a49c"} Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.120912 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-z62rr" podStartSLOduration=4.84882095 podStartE2EDuration="31.120892283s" podCreationTimestamp="2026-02-16 21:15:54 +0000 UTC" firstStartedPulling="2026-02-16 21:15:55.519565038 +0000 UTC m=+1173.338248333" lastFinishedPulling="2026-02-16 21:16:21.791636381 +0000 UTC m=+1199.610319666" 
observedRunningTime="2026-02-16 21:16:25.074314922 +0000 UTC m=+1202.892998227" watchObservedRunningTime="2026-02-16 21:16:25.120892283 +0000 UTC m=+1202.939575578" Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.130771 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2255v" podStartSLOduration=13.130751619 podStartE2EDuration="13.130751619s" podCreationTimestamp="2026-02-16 21:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:25.119955027 +0000 UTC m=+1202.938638322" watchObservedRunningTime="2026-02-16 21:16:25.130751619 +0000 UTC m=+1202.949434914" Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.839263 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-65bbdd7745-qtmzr"] Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.841398 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.844762 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.845009 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.887846 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65bbdd7745-qtmzr"] Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.937360 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-config\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.938028 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-combined-ca-bundle\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.938269 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-public-tls-certs\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.938331 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-ovndb-tls-certs\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.938364 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-httpd-config\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.938533 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-internal-tls-certs\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:25 crc kubenswrapper[4805]: I0216 21:16:25.938563 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mjb2\" (UniqueName: \"kubernetes.io/projected/3d9e7980-c791-44b2-a527-948c3b3b14e3-kube-api-access-5mjb2\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.044124 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-public-tls-certs\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.044204 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-ovndb-tls-certs\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.044230 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-httpd-config\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.044257 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-internal-tls-certs\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.044280 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mjb2\" (UniqueName: \"kubernetes.io/projected/3d9e7980-c791-44b2-a527-948c3b3b14e3-kube-api-access-5mjb2\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.044357 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-config\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.044376 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-combined-ca-bundle\") pod 
\"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.049776 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-combined-ca-bundle\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.050595 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-internal-tls-certs\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.053449 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-config\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.054055 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-public-tls-certs\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.054687 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-httpd-config\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc 
kubenswrapper[4805]: I0216 21:16:26.055824 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-ovndb-tls-certs\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.068408 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mjb2\" (UniqueName: \"kubernetes.io/projected/3d9e7980-c791-44b2-a527-948c3b3b14e3-kube-api-access-5mjb2\") pod \"neutron-65bbdd7745-qtmzr\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.100511 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d5dc9954-x56z5" event={"ID":"37e4f0f1-8158-409b-95a0-12826bddebc2","Type":"ContainerStarted","Data":"2000ac46f12288b12f5899a913f096ea75601e47b6a656ee2c1776eba5b0c7f9"} Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.100557 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d5dc9954-x56z5" event={"ID":"37e4f0f1-8158-409b-95a0-12826bddebc2","Type":"ContainerStarted","Data":"705098690a2e635d29fb6f5dc24c44c61f6c3b4a6030bfd52b4d56c668dfd944"} Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.100596 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.127946 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8d5dc9954-x56z5" podStartSLOduration=3.127922986 podStartE2EDuration="3.127922986s" podCreationTimestamp="2026-02-16 21:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:26.11912696 +0000 UTC 
m=+1203.937810255" watchObservedRunningTime="2026-02-16 21:16:26.127922986 +0000 UTC m=+1203.946606281" Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.138925 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"33763587-13b0-4c1c-af15-3164866a25aa","Type":"ContainerStarted","Data":"579b13e4e98a86f06fa8ce1c9461bcbdd689cbd24875db5359c3d75ac249fae8"} Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.138967 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"33763587-13b0-4c1c-af15-3164866a25aa","Type":"ContainerStarted","Data":"3e2d2c9307982c497c45b24d96291e10f9e747f4f9b057aa0106351f5b49f757"} Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.141778 4805 generic.go:334] "Generic (PLEG): container finished" podID="6b70cf49-b5fd-4814-87ef-e22b1b820066" containerID="88564d5a9da1fe901b4cd43a529064a813fe7dd25cd0f577a5c3eb4fd09887f2" exitCode=0 Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.141926 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" event={"ID":"6b70cf49-b5fd-4814-87ef-e22b1b820066","Type":"ContainerDied","Data":"88564d5a9da1fe901b4cd43a529064a813fe7dd25cd0f577a5c3eb4fd09887f2"} Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.149005 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7352be72-3bf9-4377-a713-ab6058b6785f","Type":"ContainerStarted","Data":"c14b3f1c62e3d753fdb59ba073293810c918aa4b0b3b65bdcb067df76e691889"} Feb 16 21:16:26 crc kubenswrapper[4805]: I0216 21:16:26.318333 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:27 crc kubenswrapper[4805]: I0216 21:16:27.162253 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"33763587-13b0-4c1c-af15-3164866a25aa","Type":"ContainerStarted","Data":"9ff579908d254c88c054d1ce4b4c1d0251e0a4fa1ca6a552c83d4f094af1a0a3"} Feb 16 21:16:27 crc kubenswrapper[4805]: I0216 21:16:27.164495 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7352be72-3bf9-4377-a713-ab6058b6785f","Type":"ContainerStarted","Data":"3086fe672dbe9a3c4c5163c1bcaa203abca8433d121980e3ed65dce2dd71734d"} Feb 16 21:16:27 crc kubenswrapper[4805]: I0216 21:16:27.179880 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.179859183 podStartE2EDuration="5.179859183s" podCreationTimestamp="2026-02-16 21:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:27.179677378 +0000 UTC m=+1204.998360673" watchObservedRunningTime="2026-02-16 21:16:27.179859183 +0000 UTC m=+1204.998542478" Feb 16 21:16:27 crc kubenswrapper[4805]: I0216 21:16:27.217297 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.217272938 podStartE2EDuration="6.217272938s" podCreationTimestamp="2026-02-16 21:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:27.207158576 +0000 UTC m=+1205.025841871" watchObservedRunningTime="2026-02-16 21:16:27.217272938 +0000 UTC m=+1205.035956233" Feb 16 21:16:27 crc kubenswrapper[4805]: I0216 21:16:27.517074 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65bbdd7745-qtmzr"] 
Feb 16 21:16:27 crc kubenswrapper[4805]: W0216 21:16:27.519931 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d9e7980_c791_44b2_a527_948c3b3b14e3.slice/crio-5ef474dcd3096582c92aa1e43f9a5491f93d41b1ff53d65454939859aaea1749 WatchSource:0}: Error finding container 5ef474dcd3096582c92aa1e43f9a5491f93d41b1ff53d65454939859aaea1749: Status 404 returned error can't find the container with id 5ef474dcd3096582c92aa1e43f9a5491f93d41b1ff53d65454939859aaea1749 Feb 16 21:16:28 crc kubenswrapper[4805]: I0216 21:16:28.177625 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65bbdd7745-qtmzr" event={"ID":"3d9e7980-c791-44b2-a527-948c3b3b14e3","Type":"ContainerStarted","Data":"5ef474dcd3096582c92aa1e43f9a5491f93d41b1ff53d65454939859aaea1749"} Feb 16 21:16:28 crc kubenswrapper[4805]: I0216 21:16:28.179415 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" event={"ID":"6b70cf49-b5fd-4814-87ef-e22b1b820066","Type":"ContainerStarted","Data":"1c4bd2bfff27d7f68c16a3274f8a0e3e1257d7d1ac0bc4740feb8c84a0f739df"} Feb 16 21:16:28 crc kubenswrapper[4805]: I0216 21:16:28.213229 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" podStartSLOduration=5.21320905 podStartE2EDuration="5.21320905s" podCreationTimestamp="2026-02-16 21:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:28.205629619 +0000 UTC m=+1206.024312914" watchObservedRunningTime="2026-02-16 21:16:28.21320905 +0000 UTC m=+1206.031892355" Feb 16 21:16:28 crc kubenswrapper[4805]: I0216 21:16:28.537148 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:29 crc kubenswrapper[4805]: I0216 21:16:29.205064 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"104ec6b3-3a02-486e-8948-0aeb16bbddd8","Type":"ContainerStarted","Data":"ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b"} Feb 16 21:16:29 crc kubenswrapper[4805]: I0216 21:16:29.207830 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65bbdd7745-qtmzr" event={"ID":"3d9e7980-c791-44b2-a527-948c3b3b14e3","Type":"ContainerStarted","Data":"f1067e7a3ce96adf6307131efa852971e92c89e36ce22c0b1f103b0aa0ef5941"} Feb 16 21:16:29 crc kubenswrapper[4805]: I0216 21:16:29.207859 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65bbdd7745-qtmzr" event={"ID":"3d9e7980-c791-44b2-a527-948c3b3b14e3","Type":"ContainerStarted","Data":"dc891749cb57102c67cc7d93a34901a0e19601c31b5532b5a55179aae3c41186"} Feb 16 21:16:29 crc kubenswrapper[4805]: I0216 21:16:29.208052 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:29 crc kubenswrapper[4805]: I0216 21:16:29.211211 4805 generic.go:334] "Generic (PLEG): container finished" podID="d50fc8fa-34b3-48cf-9e68-c474509271a3" containerID="edbc542b53d489b3887a95d9fd439f2cee6f62e9b4f31defc68635d2df88f123" exitCode=0 Feb 16 21:16:29 crc kubenswrapper[4805]: I0216 21:16:29.211280 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z62rr" event={"ID":"d50fc8fa-34b3-48cf-9e68-c474509271a3","Type":"ContainerDied","Data":"edbc542b53d489b3887a95d9fd439f2cee6f62e9b4f31defc68635d2df88f123"} Feb 16 21:16:29 crc kubenswrapper[4805]: I0216 21:16:29.402985 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-65bbdd7745-qtmzr" podStartSLOduration=4.402946475 podStartE2EDuration="4.402946475s" podCreationTimestamp="2026-02-16 21:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 21:16:29.398790159 +0000 UTC m=+1207.217473474" watchObservedRunningTime="2026-02-16 21:16:29.402946475 +0000 UTC m=+1207.221629780" Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.751283 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z62rr" Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.862398 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-config-data\") pod \"d50fc8fa-34b3-48cf-9e68-c474509271a3\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.862869 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-scripts\") pod \"d50fc8fa-34b3-48cf-9e68-c474509271a3\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.862973 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50fc8fa-34b3-48cf-9e68-c474509271a3-logs\") pod \"d50fc8fa-34b3-48cf-9e68-c474509271a3\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.863058 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-combined-ca-bundle\") pod \"d50fc8fa-34b3-48cf-9e68-c474509271a3\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.863408 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crgk5\" (UniqueName: \"kubernetes.io/projected/d50fc8fa-34b3-48cf-9e68-c474509271a3-kube-api-access-crgk5\") 
pod \"d50fc8fa-34b3-48cf-9e68-c474509271a3\" (UID: \"d50fc8fa-34b3-48cf-9e68-c474509271a3\") " Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.863733 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d50fc8fa-34b3-48cf-9e68-c474509271a3-logs" (OuterVolumeSpecName: "logs") pod "d50fc8fa-34b3-48cf-9e68-c474509271a3" (UID: "d50fc8fa-34b3-48cf-9e68-c474509271a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.864181 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50fc8fa-34b3-48cf-9e68-c474509271a3-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.870432 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d50fc8fa-34b3-48cf-9e68-c474509271a3-kube-api-access-crgk5" (OuterVolumeSpecName: "kube-api-access-crgk5") pod "d50fc8fa-34b3-48cf-9e68-c474509271a3" (UID: "d50fc8fa-34b3-48cf-9e68-c474509271a3"). InnerVolumeSpecName "kube-api-access-crgk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.870875 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-scripts" (OuterVolumeSpecName: "scripts") pod "d50fc8fa-34b3-48cf-9e68-c474509271a3" (UID: "d50fc8fa-34b3-48cf-9e68-c474509271a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.893184 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d50fc8fa-34b3-48cf-9e68-c474509271a3" (UID: "d50fc8fa-34b3-48cf-9e68-c474509271a3"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.931922 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-config-data" (OuterVolumeSpecName: "config-data") pod "d50fc8fa-34b3-48cf-9e68-c474509271a3" (UID: "d50fc8fa-34b3-48cf-9e68-c474509271a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.966097 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.966127 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.966135 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50fc8fa-34b3-48cf-9e68-c474509271a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:30 crc kubenswrapper[4805]: I0216 21:16:30.966148 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crgk5\" (UniqueName: \"kubernetes.io/projected/d50fc8fa-34b3-48cf-9e68-c474509271a3-kube-api-access-crgk5\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.236713 4805 generic.go:334] "Generic (PLEG): container finished" podID="0d3ea232-36aa-48a2-b2d4-449767fd61fb" containerID="d86b238e2ac4de09a850d02bf51b519f4562c068a1b77efdb37d56c2b91568be" exitCode=0 Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.236754 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-2255v" event={"ID":"0d3ea232-36aa-48a2-b2d4-449767fd61fb","Type":"ContainerDied","Data":"d86b238e2ac4de09a850d02bf51b519f4562c068a1b77efdb37d56c2b91568be"} Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.238534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z62rr" event={"ID":"d50fc8fa-34b3-48cf-9e68-c474509271a3","Type":"ContainerDied","Data":"eb48b27cb779e0199cf17f37df18f2d34c4702e71bf3cc5892869bc36cd9631e"} Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.238611 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z62rr" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.238615 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb48b27cb779e0199cf17f37df18f2d34c4702e71bf3cc5892869bc36cd9631e" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.566079 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-745445cc4d-b5chv"] Feb 16 21:16:31 crc kubenswrapper[4805]: E0216 21:16:31.566580 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d50fc8fa-34b3-48cf-9e68-c474509271a3" containerName="placement-db-sync" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.566596 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d50fc8fa-34b3-48cf-9e68-c474509271a3" containerName="placement-db-sync" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.566901 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d50fc8fa-34b3-48cf-9e68-c474509271a3" containerName="placement-db-sync" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.568449 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.581198 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.581386 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.581575 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.581682 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.586100 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-79kmv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.679304 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-745445cc4d-b5chv"] Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.690574 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dv67\" (UniqueName: \"kubernetes.io/projected/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-kube-api-access-7dv67\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.690653 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-scripts\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.690753 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-public-tls-certs\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.691053 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-logs\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.691084 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-config-data\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.691146 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-internal-tls-certs\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.691208 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-combined-ca-bundle\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.793035 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-logs\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.793412 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-config-data\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.793446 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-internal-tls-certs\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.793472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-combined-ca-bundle\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.793690 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dv67\" (UniqueName: \"kubernetes.io/projected/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-kube-api-access-7dv67\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.793729 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-scripts\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.793771 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-public-tls-certs\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.798097 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-logs\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.798789 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-public-tls-certs\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.802540 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-internal-tls-certs\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.802615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-combined-ca-bundle\") pod 
\"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.812190 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-scripts\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.816335 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-config-data\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.826288 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dv67\" (UniqueName: \"kubernetes.io/projected/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-kube-api-access-7dv67\") pod \"placement-745445cc4d-b5chv\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:31 crc kubenswrapper[4805]: I0216 21:16:31.898144 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:32 crc kubenswrapper[4805]: I0216 21:16:32.414339 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 21:16:32 crc kubenswrapper[4805]: I0216 21:16:32.414402 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 21:16:32 crc kubenswrapper[4805]: I0216 21:16:32.471472 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 21:16:32 crc kubenswrapper[4805]: I0216 21:16:32.500479 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 21:16:32 crc kubenswrapper[4805]: I0216 21:16:32.806482 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:32 crc kubenswrapper[4805]: I0216 21:16:32.806547 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:32 crc kubenswrapper[4805]: I0216 21:16:32.854845 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:32 crc kubenswrapper[4805]: I0216 21:16:32.873151 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:33 crc kubenswrapper[4805]: I0216 21:16:33.260159 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:33 crc kubenswrapper[4805]: I0216 21:16:33.260207 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 21:16:33 crc kubenswrapper[4805]: I0216 21:16:33.260222 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:33 crc kubenswrapper[4805]: I0216 21:16:33.260236 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 21:16:33 crc kubenswrapper[4805]: I0216 21:16:33.538912 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:33 crc kubenswrapper[4805]: I0216 21:16:33.651385 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wb67p"] Feb 16 21:16:33 crc kubenswrapper[4805]: I0216 21:16:33.651664 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" podUID="eea9ce90-7516-47bc-844e-224cf41929e4" containerName="dnsmasq-dns" containerID="cri-o://ca90aef24be827db8fb2ae475ed88f2ea679d9c43700d8e5fe123bdf5da1f7fe" gracePeriod=10 Feb 16 21:16:34 crc kubenswrapper[4805]: I0216 21:16:34.274153 4805 generic.go:334] "Generic (PLEG): container finished" podID="eea9ce90-7516-47bc-844e-224cf41929e4" containerID="ca90aef24be827db8fb2ae475ed88f2ea679d9c43700d8e5fe123bdf5da1f7fe" exitCode=0 Feb 16 21:16:34 crc kubenswrapper[4805]: I0216 21:16:34.274304 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" event={"ID":"eea9ce90-7516-47bc-844e-224cf41929e4","Type":"ContainerDied","Data":"ca90aef24be827db8fb2ae475ed88f2ea679d9c43700d8e5fe123bdf5da1f7fe"} Feb 16 21:16:34 crc kubenswrapper[4805]: I0216 21:16:34.659140 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" podUID="eea9ce90-7516-47bc-844e-224cf41929e4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: connect: connection refused" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.583317 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.688268 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-scripts\") pod \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.688517 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-combined-ca-bundle\") pod \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.688656 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8l42\" (UniqueName: \"kubernetes.io/projected/0d3ea232-36aa-48a2-b2d4-449767fd61fb-kube-api-access-l8l42\") pod \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.688802 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-config-data\") pod \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.689131 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-fernet-keys\") pod \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.689221 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-credential-keys\") pod \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\" (UID: \"0d3ea232-36aa-48a2-b2d4-449767fd61fb\") " Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.697840 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0d3ea232-36aa-48a2-b2d4-449767fd61fb" (UID: "0d3ea232-36aa-48a2-b2d4-449767fd61fb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.698108 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-scripts" (OuterVolumeSpecName: "scripts") pod "0d3ea232-36aa-48a2-b2d4-449767fd61fb" (UID: "0d3ea232-36aa-48a2-b2d4-449767fd61fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.699818 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0d3ea232-36aa-48a2-b2d4-449767fd61fb" (UID: "0d3ea232-36aa-48a2-b2d4-449767fd61fb"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.706092 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3ea232-36aa-48a2-b2d4-449767fd61fb-kube-api-access-l8l42" (OuterVolumeSpecName: "kube-api-access-l8l42") pod "0d3ea232-36aa-48a2-b2d4-449767fd61fb" (UID: "0d3ea232-36aa-48a2-b2d4-449767fd61fb"). InnerVolumeSpecName "kube-api-access-l8l42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.763844 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-config-data" (OuterVolumeSpecName: "config-data") pod "0d3ea232-36aa-48a2-b2d4-449767fd61fb" (UID: "0d3ea232-36aa-48a2-b2d4-449767fd61fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.812051 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d3ea232-36aa-48a2-b2d4-449767fd61fb" (UID: "0d3ea232-36aa-48a2-b2d4-449767fd61fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.824121 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.824346 4805 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.824404 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.824467 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:35 
crc kubenswrapper[4805]: I0216 21:16:35.824519 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8l42\" (UniqueName: \"kubernetes.io/projected/0d3ea232-36aa-48a2-b2d4-449767fd61fb-kube-api-access-l8l42\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.824570 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3ea232-36aa-48a2-b2d4-449767fd61fb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:35 crc kubenswrapper[4805]: I0216 21:16:35.952230 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.042225 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-nb\") pod \"eea9ce90-7516-47bc-844e-224cf41929e4\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.042274 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-sb\") pod \"eea9ce90-7516-47bc-844e-224cf41929e4\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.042334 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-svc\") pod \"eea9ce90-7516-47bc-844e-224cf41929e4\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.042661 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-config\") pod \"eea9ce90-7516-47bc-844e-224cf41929e4\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.042687 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gvrl\" (UniqueName: \"kubernetes.io/projected/eea9ce90-7516-47bc-844e-224cf41929e4-kube-api-access-9gvrl\") pod \"eea9ce90-7516-47bc-844e-224cf41929e4\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.042795 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-swift-storage-0\") pod \"eea9ce90-7516-47bc-844e-224cf41929e4\" (UID: \"eea9ce90-7516-47bc-844e-224cf41929e4\") " Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.053860 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea9ce90-7516-47bc-844e-224cf41929e4-kube-api-access-9gvrl" (OuterVolumeSpecName: "kube-api-access-9gvrl") pod "eea9ce90-7516-47bc-844e-224cf41929e4" (UID: "eea9ce90-7516-47bc-844e-224cf41929e4"). InnerVolumeSpecName "kube-api-access-9gvrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.128989 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eea9ce90-7516-47bc-844e-224cf41929e4" (UID: "eea9ce90-7516-47bc-844e-224cf41929e4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.130781 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-config" (OuterVolumeSpecName: "config") pod "eea9ce90-7516-47bc-844e-224cf41929e4" (UID: "eea9ce90-7516-47bc-844e-224cf41929e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.136859 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eea9ce90-7516-47bc-844e-224cf41929e4" (UID: "eea9ce90-7516-47bc-844e-224cf41929e4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.138295 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-745445cc4d-b5chv"] Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.145148 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.145181 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gvrl\" (UniqueName: \"kubernetes.io/projected/eea9ce90-7516-47bc-844e-224cf41929e4-kube-api-access-9gvrl\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.145191 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.145200 4805 reconciler_common.go:293] 
"Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.148299 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eea9ce90-7516-47bc-844e-224cf41929e4" (UID: "eea9ce90-7516-47bc-844e-224cf41929e4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:36 crc kubenswrapper[4805]: W0216 21:16:36.148647 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod381367ab_4da1_4ed6_bdcb_9d68c4f7e4dd.slice/crio-38232e8be4915c57abe56e232b464989cc2ee720a033ccfef86fa27bb319e931 WatchSource:0}: Error finding container 38232e8be4915c57abe56e232b464989cc2ee720a033ccfef86fa27bb319e931: Status 404 returned error can't find the container with id 38232e8be4915c57abe56e232b464989cc2ee720a033ccfef86fa27bb319e931 Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.150818 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eea9ce90-7516-47bc-844e-224cf41929e4" (UID: "eea9ce90-7516-47bc-844e-224cf41929e4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.246327 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.246357 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea9ce90-7516-47bc-844e-224cf41929e4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.311671 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2255v" event={"ID":"0d3ea232-36aa-48a2-b2d4-449767fd61fb","Type":"ContainerDied","Data":"5cca530433c7bb7f72d07c27a5f294343d9a2515676a64e522c812e37f9bca6a"} Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.311707 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cca530433c7bb7f72d07c27a5f294343d9a2515676a64e522c812e37f9bca6a" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.311770 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2255v" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.320699 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"104ec6b3-3a02-486e-8948-0aeb16bbddd8","Type":"ContainerStarted","Data":"8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685"} Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.333285 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" event={"ID":"eea9ce90-7516-47bc-844e-224cf41929e4","Type":"ContainerDied","Data":"e5ea28d6a8170d34d961f205b19a5fc0130b66cc00338ad41ee3ffabf49ff5ed"} Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.333333 4805 scope.go:117] "RemoveContainer" containerID="ca90aef24be827db8fb2ae475ed88f2ea679d9c43700d8e5fe123bdf5da1f7fe" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.333451 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wb67p" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.341440 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-745445cc4d-b5chv" event={"ID":"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd","Type":"ContainerStarted","Data":"38232e8be4915c57abe56e232b464989cc2ee720a033ccfef86fa27bb319e931"} Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.343178 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-txbxn" event={"ID":"ab6c7759-7bcf-4efa-b50f-b73e87f20842","Type":"ContainerStarted","Data":"c11f193ec70d131b65472c4c2ee5963c85c45e4a264e6ccff7cb89ae04ac4ac8"} Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.375234 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-txbxn" podStartSLOduration=2.964602866 podStartE2EDuration="43.375218383s" podCreationTimestamp="2026-02-16 21:15:53 +0000 UTC" firstStartedPulling="2026-02-16 
21:15:55.244885967 +0000 UTC m=+1173.063569262" lastFinishedPulling="2026-02-16 21:16:35.655501484 +0000 UTC m=+1213.474184779" observedRunningTime="2026-02-16 21:16:36.359053901 +0000 UTC m=+1214.177737196" watchObservedRunningTime="2026-02-16 21:16:36.375218383 +0000 UTC m=+1214.193901678" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.403221 4805 scope.go:117] "RemoveContainer" containerID="9dd86db6be87deef5f71a186cce137063380fd472f167fd879141be139d1371f" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.403469 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wb67p"] Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.423251 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wb67p"] Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.783390 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.783484 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.825182 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-747f5c598c-x2pl7"] Feb 16 21:16:36 crc kubenswrapper[4805]: E0216 21:16:36.825636 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3ea232-36aa-48a2-b2d4-449767fd61fb" containerName="keystone-bootstrap" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.825652 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3ea232-36aa-48a2-b2d4-449767fd61fb" containerName="keystone-bootstrap" Feb 16 21:16:36 crc kubenswrapper[4805]: E0216 21:16:36.825672 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea9ce90-7516-47bc-844e-224cf41929e4" containerName="dnsmasq-dns" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.825679 4805 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="eea9ce90-7516-47bc-844e-224cf41929e4" containerName="dnsmasq-dns" Feb 16 21:16:36 crc kubenswrapper[4805]: E0216 21:16:36.825693 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea9ce90-7516-47bc-844e-224cf41929e4" containerName="init" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.825699 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea9ce90-7516-47bc-844e-224cf41929e4" containerName="init" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.825924 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea9ce90-7516-47bc-844e-224cf41929e4" containerName="dnsmasq-dns" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.825951 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3ea232-36aa-48a2-b2d4-449767fd61fb" containerName="keystone-bootstrap" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.826645 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.843108 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.843442 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xc6z5" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.843608 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.843751 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.844029 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.847010 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-keystone-public-svc" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.847166 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-747f5c598c-x2pl7"] Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.921273 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.921378 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.926532 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.963534 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxq2g\" (UniqueName: \"kubernetes.io/projected/52d69bb9-6a6c-4f70-8319-730e54f0e66a-kube-api-access-cxq2g\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.963626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-public-tls-certs\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.963659 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-combined-ca-bundle\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 
21:16:36.963682 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-config-data\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.963828 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-fernet-keys\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.963851 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-credential-keys\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.963874 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-internal-tls-certs\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:36 crc kubenswrapper[4805]: I0216 21:16:36.963907 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-scripts\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.065767 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-config-data\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.065900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-fernet-keys\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.065933 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-credential-keys\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.065956 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-internal-tls-certs\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.065991 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-scripts\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.066059 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxq2g\" (UniqueName: 
\"kubernetes.io/projected/52d69bb9-6a6c-4f70-8319-730e54f0e66a-kube-api-access-cxq2g\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.066102 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-public-tls-certs\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.066121 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-combined-ca-bundle\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.100638 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-combined-ca-bundle\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.100742 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-scripts\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.107216 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-credential-keys\") pod 
\"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.107335 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-fernet-keys\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.107426 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-public-tls-certs\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.109475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-config-data\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.111450 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxq2g\" (UniqueName: \"kubernetes.io/projected/52d69bb9-6a6c-4f70-8319-730e54f0e66a-kube-api-access-cxq2g\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.112578 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d69bb9-6a6c-4f70-8319-730e54f0e66a-internal-tls-certs\") pod \"keystone-747f5c598c-x2pl7\" (UID: \"52d69bb9-6a6c-4f70-8319-730e54f0e66a\") " pod="openstack/keystone-747f5c598c-x2pl7" 
Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.149349 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.279256 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.362161 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-745445cc4d-b5chv" event={"ID":"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd","Type":"ContainerStarted","Data":"849a536c17e68d02ab84de54c9406ad3e7da63f7d98a5b0728e1904ccf580c7e"} Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.362200 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-745445cc4d-b5chv" event={"ID":"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd","Type":"ContainerStarted","Data":"7abd4a6d8d53b28c9f2baa9f0f3385d12b4e6fc2ac0992220519b15a94f3ba4c"} Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.363862 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.363894 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.372842 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ms99" event={"ID":"c8125a07-0bfb-4381-80e2-bf5bb1525026","Type":"ContainerStarted","Data":"27c6504c014c38dd152e300c2982f98088bd369b700b8093230c75b2bb377dac"} Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.389170 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qs466" event={"ID":"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b","Type":"ContainerStarted","Data":"1dc8ebced911595effdfb46769b8f5a1816b37d485173fe40b8a24be6cdc4f14"} Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 
21:16:37.394259 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-745445cc4d-b5chv" podStartSLOduration=6.39424193 podStartE2EDuration="6.39424193s" podCreationTimestamp="2026-02-16 21:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:37.38777972 +0000 UTC m=+1215.206463015" watchObservedRunningTime="2026-02-16 21:16:37.39424193 +0000 UTC m=+1215.212925225" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.433104 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9ms99" podStartSLOduration=3.547573427 podStartE2EDuration="44.433084115s" podCreationTimestamp="2026-02-16 21:15:53 +0000 UTC" firstStartedPulling="2026-02-16 21:15:55.241504743 +0000 UTC m=+1173.060188038" lastFinishedPulling="2026-02-16 21:16:36.127015431 +0000 UTC m=+1213.945698726" observedRunningTime="2026-02-16 21:16:37.408895359 +0000 UTC m=+1215.227578654" watchObservedRunningTime="2026-02-16 21:16:37.433084115 +0000 UTC m=+1215.251767410" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.449291 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-qs466" podStartSLOduration=3.288684007 podStartE2EDuration="44.449271147s" podCreationTimestamp="2026-02-16 21:15:53 +0000 UTC" firstStartedPulling="2026-02-16 21:15:54.964746194 +0000 UTC m=+1172.783429489" lastFinishedPulling="2026-02-16 21:16:36.125333334 +0000 UTC m=+1213.944016629" observedRunningTime="2026-02-16 21:16:37.429781173 +0000 UTC m=+1215.248464468" watchObservedRunningTime="2026-02-16 21:16:37.449271147 +0000 UTC m=+1215.267954442" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.618882 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea9ce90-7516-47bc-844e-224cf41929e4" 
path="/var/lib/kubelet/pods/eea9ce90-7516-47bc-844e-224cf41929e4/volumes" Feb 16 21:16:37 crc kubenswrapper[4805]: I0216 21:16:37.722278 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-747f5c598c-x2pl7"] Feb 16 21:16:38 crc kubenswrapper[4805]: I0216 21:16:38.408092 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-747f5c598c-x2pl7" event={"ID":"52d69bb9-6a6c-4f70-8319-730e54f0e66a","Type":"ContainerStarted","Data":"418f8586fb895df2d79d921d50ba9bbb46f3ebbf547a509922ef07c059348aa6"} Feb 16 21:16:38 crc kubenswrapper[4805]: I0216 21:16:38.408674 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-747f5c598c-x2pl7" event={"ID":"52d69bb9-6a6c-4f70-8319-730e54f0e66a","Type":"ContainerStarted","Data":"c86320c880f28ffe8ec4b4f4860c65ab04dc8dec824a06286c29cbafe15aae95"} Feb 16 21:16:38 crc kubenswrapper[4805]: I0216 21:16:38.408713 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-747f5c598c-x2pl7" Feb 16 21:16:38 crc kubenswrapper[4805]: I0216 21:16:38.426706 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-747f5c598c-x2pl7" podStartSLOduration=2.426686042 podStartE2EDuration="2.426686042s" podCreationTimestamp="2026-02-16 21:16:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:38.423238505 +0000 UTC m=+1216.241921800" watchObservedRunningTime="2026-02-16 21:16:38.426686042 +0000 UTC m=+1216.245369327" Feb 16 21:16:41 crc kubenswrapper[4805]: I0216 21:16:41.444962 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab6c7759-7bcf-4efa-b50f-b73e87f20842" containerID="c11f193ec70d131b65472c4c2ee5963c85c45e4a264e6ccff7cb89ae04ac4ac8" exitCode=0 Feb 16 21:16:41 crc kubenswrapper[4805]: I0216 21:16:41.445469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-sync-txbxn" event={"ID":"ab6c7759-7bcf-4efa-b50f-b73e87f20842","Type":"ContainerDied","Data":"c11f193ec70d131b65472c4c2ee5963c85c45e4a264e6ccff7cb89ae04ac4ac8"} Feb 16 21:16:42 crc kubenswrapper[4805]: I0216 21:16:42.455004 4805 generic.go:334] "Generic (PLEG): container finished" podID="fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b" containerID="1dc8ebced911595effdfb46769b8f5a1816b37d485173fe40b8a24be6cdc4f14" exitCode=0 Feb 16 21:16:42 crc kubenswrapper[4805]: I0216 21:16:42.455091 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qs466" event={"ID":"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b","Type":"ContainerDied","Data":"1dc8ebced911595effdfb46769b8f5a1816b37d485173fe40b8a24be6cdc4f14"} Feb 16 21:16:44 crc kubenswrapper[4805]: I0216 21:16:44.503437 4805 generic.go:334] "Generic (PLEG): container finished" podID="c8125a07-0bfb-4381-80e2-bf5bb1525026" containerID="27c6504c014c38dd152e300c2982f98088bd369b700b8093230c75b2bb377dac" exitCode=0 Feb 16 21:16:44 crc kubenswrapper[4805]: I0216 21:16:44.503504 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ms99" event={"ID":"c8125a07-0bfb-4381-80e2-bf5bb1525026","Type":"ContainerDied","Data":"27c6504c014c38dd152e300c2982f98088bd369b700b8093230c75b2bb377dac"} Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.024306 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qs466" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.034252 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-txbxn" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.148886 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-db-sync-config-data\") pod \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.148977 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-config-data\") pod \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.149113 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-combined-ca-bundle\") pod \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.149167 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdkf8\" (UniqueName: \"kubernetes.io/projected/ab6c7759-7bcf-4efa-b50f-b73e87f20842-kube-api-access-fdkf8\") pod \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\" (UID: \"ab6c7759-7bcf-4efa-b50f-b73e87f20842\") " Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.149212 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-combined-ca-bundle\") pod \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.149258 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-8qxl6\" (UniqueName: \"kubernetes.io/projected/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-kube-api-access-8qxl6\") pod \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\" (UID: \"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b\") " Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.156203 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6c7759-7bcf-4efa-b50f-b73e87f20842-kube-api-access-fdkf8" (OuterVolumeSpecName: "kube-api-access-fdkf8") pod "ab6c7759-7bcf-4efa-b50f-b73e87f20842" (UID: "ab6c7759-7bcf-4efa-b50f-b73e87f20842"). InnerVolumeSpecName "kube-api-access-fdkf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.156272 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-kube-api-access-8qxl6" (OuterVolumeSpecName: "kube-api-access-8qxl6") pod "fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b" (UID: "fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b"). InnerVolumeSpecName "kube-api-access-8qxl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.157442 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ab6c7759-7bcf-4efa-b50f-b73e87f20842" (UID: "ab6c7759-7bcf-4efa-b50f-b73e87f20842"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.182261 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab6c7759-7bcf-4efa-b50f-b73e87f20842" (UID: "ab6c7759-7bcf-4efa-b50f-b73e87f20842"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.188509 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b" (UID: "fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.250683 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-config-data" (OuterVolumeSpecName: "config-data") pod "fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b" (UID: "fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.251406 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.251435 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.251446 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdkf8\" (UniqueName: \"kubernetes.io/projected/ab6c7759-7bcf-4efa-b50f-b73e87f20842-kube-api-access-fdkf8\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.251454 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.251465 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qxl6\" (UniqueName: \"kubernetes.io/projected/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b-kube-api-access-8qxl6\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.251474 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab6c7759-7bcf-4efa-b50f-b73e87f20842-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.536118 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qs466" event={"ID":"fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b","Type":"ContainerDied","Data":"34f6e186da7eed2c50ce08151a27da7ce949b306e7a3053eaaf6661e92ac67d2"} Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.536153 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34f6e186da7eed2c50ce08151a27da7ce949b306e7a3053eaaf6661e92ac67d2" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.536816 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qs466" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.555946 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-txbxn" Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.556846 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-txbxn" event={"ID":"ab6c7759-7bcf-4efa-b50f-b73e87f20842","Type":"ContainerDied","Data":"49dd8717b11d30e402d3967d7fb919fcbdb6837de27b30bc0d25f2ce23842765"} Feb 16 21:16:45 crc kubenswrapper[4805]: I0216 21:16:45.556958 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49dd8717b11d30e402d3967d7fb919fcbdb6837de27b30bc0d25f2ce23842765" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.323736 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6d48f79d95-n857j"] Feb 16 21:16:46 crc kubenswrapper[4805]: E0216 21:16:46.324501 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6c7759-7bcf-4efa-b50f-b73e87f20842" containerName="barbican-db-sync" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.324521 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6c7759-7bcf-4efa-b50f-b73e87f20842" containerName="barbican-db-sync" Feb 16 21:16:46 crc kubenswrapper[4805]: E0216 21:16:46.324572 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b" containerName="heat-db-sync" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.324578 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b" containerName="heat-db-sync" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.324870 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b" containerName="heat-db-sync" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.324895 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6c7759-7bcf-4efa-b50f-b73e87f20842" containerName="barbican-db-sync" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 
21:16:46.326114 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.343106 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.343258 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.343278 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6kmxj" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.350341 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d48f79d95-n857j"] Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.383934 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk"] Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.386461 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.390677 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.485297 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9503d6c3-cc2c-4a51-89c7-33339db1da77-config-data\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.485360 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-combined-ca-bundle\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.485416 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9503d6c3-cc2c-4a51-89c7-33339db1da77-config-data-custom\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.485438 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgm7s\" (UniqueName: \"kubernetes.io/projected/9503d6c3-cc2c-4a51-89c7-33339db1da77-kube-api-access-dgm7s\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc 
kubenswrapper[4805]: I0216 21:16:46.485454 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9503d6c3-cc2c-4a51-89c7-33339db1da77-logs\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.485479 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-config-data-custom\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.485499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9503d6c3-cc2c-4a51-89c7-33339db1da77-combined-ca-bundle\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.485522 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-config-data\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.485573 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgk87\" (UniqueName: \"kubernetes.io/projected/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-kube-api-access-xgk87\") pod 
\"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.485604 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-logs\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.493933 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk"] Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.522083 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z72sz"] Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.528934 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.560369 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z72sz"] Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.589025 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9503d6c3-cc2c-4a51-89c7-33339db1da77-config-data-custom\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.590368 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgm7s\" (UniqueName: \"kubernetes.io/projected/9503d6c3-cc2c-4a51-89c7-33339db1da77-kube-api-access-dgm7s\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.593926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9503d6c3-cc2c-4a51-89c7-33339db1da77-logs\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-config-data-custom\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594203 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9503d6c3-cc2c-4a51-89c7-33339db1da77-combined-ca-bundle\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594237 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594267 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-config\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594297 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-config-data\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594353 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594446 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594487 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgk87\" (UniqueName: \"kubernetes.io/projected/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-kube-api-access-xgk87\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594575 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-logs\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594817 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd4hr\" (UniqueName: \"kubernetes.io/projected/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-kube-api-access-sd4hr\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594856 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9503d6c3-cc2c-4a51-89c7-33339db1da77-config-data\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: 
I0216 21:16:46.594890 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.594929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-combined-ca-bundle\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.596247 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9503d6c3-cc2c-4a51-89c7-33339db1da77-logs\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.596656 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9503d6c3-cc2c-4a51-89c7-33339db1da77-config-data-custom\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.597367 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-logs\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 
21:16:46.600439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-combined-ca-bundle\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.602422 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9503d6c3-cc2c-4a51-89c7-33339db1da77-combined-ca-bundle\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.603512 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-config-data\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.611672 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgm7s\" (UniqueName: \"kubernetes.io/projected/9503d6c3-cc2c-4a51-89c7-33339db1da77-kube-api-access-dgm7s\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.612637 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9503d6c3-cc2c-4a51-89c7-33339db1da77-config-data\") pod \"barbican-worker-6d48f79d95-n857j\" (UID: \"9503d6c3-cc2c-4a51-89c7-33339db1da77\") " pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 
21:16:46.619301 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b75669fb6-kkmnk"] Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.621381 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.628461 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgk87\" (UniqueName: \"kubernetes.io/projected/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-kube-api-access-xgk87\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.632266 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fa81bfa-8c27-4546-9c30-1c52781a7ecb-config-data-custom\") pod \"barbican-keystone-listener-7f5cbfc9c8-dwmdk\" (UID: \"6fa81bfa-8c27-4546-9c30-1c52781a7ecb\") " pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.638813 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b75669fb6-kkmnk"] Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.639053 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.697402 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.697445 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f401223c-dc9e-4cf2-b86e-9888a86f2a03-logs\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.697482 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-combined-ca-bundle\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.697509 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4pmw\" (UniqueName: \"kubernetes.io/projected/f401223c-dc9e-4cf2-b86e-9888a86f2a03-kube-api-access-z4pmw\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.697531 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd4hr\" (UniqueName: \"kubernetes.io/projected/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-kube-api-access-sd4hr\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.697548 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data-custom\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 
21:16:46.697569 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.697644 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.697665 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-config\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.697702 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.697804 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.699157 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-config\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.699308 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.699314 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.699400 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.700132 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.708300 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6d48f79d95-n857j" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.723336 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd4hr\" (UniqueName: \"kubernetes.io/projected/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-kube-api-access-sd4hr\") pod \"dnsmasq-dns-848cf88cfc-z72sz\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.739669 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.800352 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.800435 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f401223c-dc9e-4cf2-b86e-9888a86f2a03-logs\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.800517 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-combined-ca-bundle\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.800591 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4pmw\" (UniqueName: 
\"kubernetes.io/projected/f401223c-dc9e-4cf2-b86e-9888a86f2a03-kube-api-access-z4pmw\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.800616 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data-custom\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.801206 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f401223c-dc9e-4cf2-b86e-9888a86f2a03-logs\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.803798 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.808825 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data-custom\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.808833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-combined-ca-bundle\") 
pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.816600 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4pmw\" (UniqueName: \"kubernetes.io/projected/f401223c-dc9e-4cf2-b86e-9888a86f2a03-kube-api-access-z4pmw\") pod \"barbican-api-7b75669fb6-kkmnk\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") " pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:46 crc kubenswrapper[4805]: I0216 21:16:46.864307 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.077210 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.090308 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9ms99" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.211802 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8125a07-0bfb-4381-80e2-bf5bb1525026-etc-machine-id\") pod \"c8125a07-0bfb-4381-80e2-bf5bb1525026\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.211911 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8125a07-0bfb-4381-80e2-bf5bb1525026-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c8125a07-0bfb-4381-80e2-bf5bb1525026" (UID: "c8125a07-0bfb-4381-80e2-bf5bb1525026"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.211920 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zxq9\" (UniqueName: \"kubernetes.io/projected/c8125a07-0bfb-4381-80e2-bf5bb1525026-kube-api-access-7zxq9\") pod \"c8125a07-0bfb-4381-80e2-bf5bb1525026\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.211971 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-db-sync-config-data\") pod \"c8125a07-0bfb-4381-80e2-bf5bb1525026\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.212030 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-scripts\") pod \"c8125a07-0bfb-4381-80e2-bf5bb1525026\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.212071 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-combined-ca-bundle\") pod \"c8125a07-0bfb-4381-80e2-bf5bb1525026\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.212126 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-config-data\") pod \"c8125a07-0bfb-4381-80e2-bf5bb1525026\" (UID: \"c8125a07-0bfb-4381-80e2-bf5bb1525026\") " Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.212846 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c8125a07-0bfb-4381-80e2-bf5bb1525026-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.218336 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c8125a07-0bfb-4381-80e2-bf5bb1525026" (UID: "c8125a07-0bfb-4381-80e2-bf5bb1525026"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.218422 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8125a07-0bfb-4381-80e2-bf5bb1525026-kube-api-access-7zxq9" (OuterVolumeSpecName: "kube-api-access-7zxq9") pod "c8125a07-0bfb-4381-80e2-bf5bb1525026" (UID: "c8125a07-0bfb-4381-80e2-bf5bb1525026"). InnerVolumeSpecName "kube-api-access-7zxq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.227230 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-scripts" (OuterVolumeSpecName: "scripts") pod "c8125a07-0bfb-4381-80e2-bf5bb1525026" (UID: "c8125a07-0bfb-4381-80e2-bf5bb1525026"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.257809 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8125a07-0bfb-4381-80e2-bf5bb1525026" (UID: "c8125a07-0bfb-4381-80e2-bf5bb1525026"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.317569 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zxq9\" (UniqueName: \"kubernetes.io/projected/c8125a07-0bfb-4381-80e2-bf5bb1525026-kube-api-access-7zxq9\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.317609 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.317624 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.317635 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.371970 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-config-data" (OuterVolumeSpecName: "config-data") pod "c8125a07-0bfb-4381-80e2-bf5bb1525026" (UID: "c8125a07-0bfb-4381-80e2-bf5bb1525026"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.421849 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8125a07-0bfb-4381-80e2-bf5bb1525026-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.617980 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="ceilometer-central-agent" containerID="cri-o://2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0" gracePeriod=30 Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.618609 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="ceilometer-notification-agent" containerID="cri-o://ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b" gracePeriod=30 Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.618639 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="proxy-httpd" containerID="cri-o://3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561" gracePeriod=30 Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.618679 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="sg-core" containerID="cri-o://8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685" gracePeriod=30 Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.637180 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9ms99" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.657594 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"104ec6b3-3a02-486e-8948-0aeb16bbddd8","Type":"ContainerStarted","Data":"3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561"} Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.657638 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ms99" event={"ID":"c8125a07-0bfb-4381-80e2-bf5bb1525026","Type":"ContainerDied","Data":"68d1ab1edf6967f6797f52fadbf4205ae99764541882d0167771c9a02976e1a0"} Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.657651 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68d1ab1edf6967f6797f52fadbf4205ae99764541882d0167771c9a02976e1a0" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.657665 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.667412 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.19180703 podStartE2EDuration="53.667391158s" podCreationTimestamp="2026-02-16 21:15:54 +0000 UTC" firstStartedPulling="2026-02-16 21:15:55.763712395 +0000 UTC m=+1173.582395690" lastFinishedPulling="2026-02-16 21:16:47.239296523 +0000 UTC m=+1225.057979818" observedRunningTime="2026-02-16 21:16:47.653088188 +0000 UTC m=+1225.471771483" watchObservedRunningTime="2026-02-16 21:16:47.667391158 +0000 UTC m=+1225.486074453" Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.752110 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d48f79d95-n857j"] Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.911078 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z72sz"] Feb 16 
21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.947567 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b75669fb6-kkmnk"] Feb 16 21:16:47 crc kubenswrapper[4805]: I0216 21:16:47.959488 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk"] Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.510185 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:16:48 crc kubenswrapper[4805]: E0216 21:16:48.517131 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8125a07-0bfb-4381-80e2-bf5bb1525026" containerName="cinder-db-sync" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.517377 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8125a07-0bfb-4381-80e2-bf5bb1525026" containerName="cinder-db-sync" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.517757 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8125a07-0bfb-4381-80e2-bf5bb1525026" containerName="cinder-db-sync" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.519034 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.527327 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.528535 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.528617 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.529204 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-r9xdp" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.535789 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.584126 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.584198 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.584225 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65b239aa-5174-4a48-a3d9-2df4d824e76b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.584248 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twk6q\" (UniqueName: \"kubernetes.io/projected/65b239aa-5174-4a48-a3d9-2df4d824e76b-kube-api-access-twk6q\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.584282 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.584348 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-scripts\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.618470 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z72sz"] Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.649945 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxl5d"] Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.651755 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.678112 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxl5d"] Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.692535 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-scripts\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.692660 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.692712 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.692769 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65b239aa-5174-4a48-a3d9-2df4d824e76b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.692792 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twk6q\" (UniqueName: \"kubernetes.io/projected/65b239aa-5174-4a48-a3d9-2df4d824e76b-kube-api-access-twk6q\") pod \"cinder-scheduler-0\" (UID: 
\"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.692824 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.696151 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65b239aa-5174-4a48-a3d9-2df4d824e76b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.699589 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.710353 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.711705 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.716985 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-scripts\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.718109 4805 generic.go:334] "Generic (PLEG): container finished" podID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerID="8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685" exitCode=2 Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.718285 4805 generic.go:334] "Generic (PLEG): container finished" podID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerID="2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0" exitCode=0 Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.718472 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"104ec6b3-3a02-486e-8948-0aeb16bbddd8","Type":"ContainerDied","Data":"8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685"} Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.718673 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"104ec6b3-3a02-486e-8948-0aeb16bbddd8","Type":"ContainerDied","Data":"2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0"} Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.730537 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b75669fb6-kkmnk" event={"ID":"f401223c-dc9e-4cf2-b86e-9888a86f2a03","Type":"ContainerStarted","Data":"a50854e9179ad1f7e3a7a201999f56f82029dd5d0a4a66f258a481e618572a44"} Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.730582 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b75669fb6-kkmnk" event={"ID":"f401223c-dc9e-4cf2-b86e-9888a86f2a03","Type":"ContainerStarted","Data":"8ff97d9900267491cdda3219cddc752753a02e59685d099f2e9196f942531419"} Feb 16 21:16:48 
crc kubenswrapper[4805]: I0216 21:16:48.748706 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twk6q\" (UniqueName: \"kubernetes.io/projected/65b239aa-5174-4a48-a3d9-2df4d824e76b-kube-api-access-twk6q\") pod \"cinder-scheduler-0\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.778403 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.780512 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.780784 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" event={"ID":"6fa81bfa-8c27-4546-9c30-1c52781a7ecb","Type":"ContainerStarted","Data":"e27fdb350410ff2c29cfff97bb06e05ba42a9ce1fcab98592de9f4df1d09ff6e"} Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.784373 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.795469 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.796903 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-config\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.796992 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hj6\" (UniqueName: \"kubernetes.io/projected/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-kube-api-access-w4hj6\") 
pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.797073 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-svc\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.797120 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.797143 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.797224 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.807500 4805 generic.go:334] "Generic (PLEG): container finished" podID="efd834b8-895f-4f77-ac50-9e9b42ac9bd4" 
containerID="804df028636036504694d3fa25d229293223c8e4d4c192565568123cacf6425b" exitCode=0 Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.807591 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" event={"ID":"efd834b8-895f-4f77-ac50-9e9b42ac9bd4","Type":"ContainerDied","Data":"804df028636036504694d3fa25d229293223c8e4d4c192565568123cacf6425b"} Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.807618 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" event={"ID":"efd834b8-895f-4f77-ac50-9e9b42ac9bd4","Type":"ContainerStarted","Data":"111242eb1a79852bc0aa433d4614ab50d026cb99f9e31852e8b1d529642da2d5"} Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.811197 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d48f79d95-n857j" event={"ID":"9503d6c3-cc2c-4a51-89c7-33339db1da77","Type":"ContainerStarted","Data":"7de592603a349efe68ec6b781ea07cf5a5ae47d7f88d9f980641b4479b36d8b7"} Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.859197 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hj6\" (UniqueName: \"kubernetes.io/projected/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-kube-api-access-w4hj6\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911567 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911642 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-svc\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911673 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djscf\" (UniqueName: \"kubernetes.io/projected/baa0398e-38fe-456c-8456-53c083f8e121-kube-api-access-djscf\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911702 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baa0398e-38fe-456c-8456-53c083f8e121-logs\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:48 crc 
kubenswrapper[4805]: I0216 21:16:48.911762 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911784 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data-custom\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911799 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-scripts\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911815 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911847 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911867 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911897 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-config\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.911966 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baa0398e-38fe-456c-8456-53c083f8e121-etc-machine-id\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.913190 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-svc\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.913828 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-config\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.914447 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.914796 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.914929 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.956246 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hj6\" (UniqueName: \"kubernetes.io/projected/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-kube-api-access-w4hj6\") pod \"dnsmasq-dns-6578955fd5-pxl5d\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:48 crc kubenswrapper[4805]: I0216 21:16:48.973022 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.024354 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djscf\" (UniqueName: \"kubernetes.io/projected/baa0398e-38fe-456c-8456-53c083f8e121-kube-api-access-djscf\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.024399 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baa0398e-38fe-456c-8456-53c083f8e121-logs\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.024442 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data-custom\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.024458 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-scripts\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.024504 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.025021 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/baa0398e-38fe-456c-8456-53c083f8e121-logs\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.034683 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baa0398e-38fe-456c-8456-53c083f8e121-etc-machine-id\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.034764 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baa0398e-38fe-456c-8456-53c083f8e121-etc-machine-id\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.034855 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.041004 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.041349 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-scripts\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.041386 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data-custom\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.044058 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djscf\" (UniqueName: \"kubernetes.io/projected/baa0398e-38fe-456c-8456-53c083f8e121-kube-api-access-djscf\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.050185 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: E0216 21:16:49.250664 4805 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 16 21:16:49 crc kubenswrapper[4805]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/efd834b8-895f-4f77-ac50-9e9b42ac9bd4/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 16 21:16:49 crc kubenswrapper[4805]: > podSandboxID="111242eb1a79852bc0aa433d4614ab50d026cb99f9e31852e8b1d529642da2d5" Feb 16 21:16:49 crc kubenswrapper[4805]: E0216 21:16:49.250871 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 16 21:16:49 crc kubenswrapper[4805]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n99h8bhd9h696h649h588h5c6h658h5b4h57fh65h89h5f5h56h696h5dh8h57h597h68ch568h58dh66hf4h675h598h588h67dhb5h69h5dh6bq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sd4hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-848cf88cfc-z72sz_openstack(efd834b8-895f-4f77-ac50-9e9b42ac9bd4): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/efd834b8-895f-4f77-ac50-9e9b42ac9bd4/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 16 21:16:49 crc kubenswrapper[4805]: > logger="UnhandledError" Feb 16 21:16:49 crc kubenswrapper[4805]: E0216 21:16:49.252088 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/efd834b8-895f-4f77-ac50-9e9b42ac9bd4/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" podUID="efd834b8-895f-4f77-ac50-9e9b42ac9bd4" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.291488 4805 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.583134 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:16:49 crc kubenswrapper[4805]: W0216 21:16:49.742588 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82c4a0ac_984b_4dd6_b70f_8c9ddbdf53d7.slice/crio-8c1a2e63d3fcdb8566f19a67eeb30429df620a721579ef632e0fe347189379fb WatchSource:0}: Error finding container 8c1a2e63d3fcdb8566f19a67eeb30429df620a721579ef632e0fe347189379fb: Status 404 returned error can't find the container with id 8c1a2e63d3fcdb8566f19a67eeb30429df620a721579ef632e0fe347189379fb Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.752452 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxl5d"] Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.868534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" event={"ID":"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7","Type":"ContainerStarted","Data":"8c1a2e63d3fcdb8566f19a67eeb30429df620a721579ef632e0fe347189379fb"} Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.896293 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65b239aa-5174-4a48-a3d9-2df4d824e76b","Type":"ContainerStarted","Data":"60e27743b363fe5f76669869008ce32e737fc40dcec5b54edd81f96e415788fe"} Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.914628 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b75669fb6-kkmnk" event={"ID":"f401223c-dc9e-4cf2-b86e-9888a86f2a03","Type":"ContainerStarted","Data":"1b9489d5ee02cba3db8dce72c2f23a796fb4f8a1fe44228d8f719142bf8df113"} Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.915109 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.915146 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:49 crc kubenswrapper[4805]: I0216 21:16:49.942257 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b75669fb6-kkmnk" podStartSLOduration=3.942238755 podStartE2EDuration="3.942238755s" podCreationTimestamp="2026-02-16 21:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:49.934226812 +0000 UTC m=+1227.752910107" watchObservedRunningTime="2026-02-16 21:16:49.942238755 +0000 UTC m=+1227.760922050" Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.007172 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.722151 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.897676 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-swift-storage-0\") pod \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.898032 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-config\") pod \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.898105 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-svc\") pod \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.898178 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd4hr\" (UniqueName: \"kubernetes.io/projected/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-kube-api-access-sd4hr\") pod \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.898217 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-nb\") pod \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.898388 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-sb\") pod \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\" (UID: \"efd834b8-895f-4f77-ac50-9e9b42ac9bd4\") " Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.938382 4805 generic.go:334] "Generic (PLEG): container finished" podID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerID="ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b" exitCode=0 Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.938440 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"104ec6b3-3a02-486e-8948-0aeb16bbddd8","Type":"ContainerDied","Data":"ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b"} Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.940794 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.941605 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z72sz" event={"ID":"efd834b8-895f-4f77-ac50-9e9b42ac9bd4","Type":"ContainerDied","Data":"111242eb1a79852bc0aa433d4614ab50d026cb99f9e31852e8b1d529642da2d5"} Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.941629 4805 scope.go:117] "RemoveContainer" containerID="804df028636036504694d3fa25d229293223c8e4d4c192565568123cacf6425b" Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.943006 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-kube-api-access-sd4hr" (OuterVolumeSpecName: "kube-api-access-sd4hr") pod "efd834b8-895f-4f77-ac50-9e9b42ac9bd4" (UID: "efd834b8-895f-4f77-ac50-9e9b42ac9bd4"). InnerVolumeSpecName "kube-api-access-sd4hr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.952858 4805 generic.go:334] "Generic (PLEG): container finished" podID="82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" containerID="45d9ec231aabf617af0f266cf86732671a42801e82fde3ab96d299fbcd4347dd" exitCode=0 Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.952935 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" event={"ID":"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7","Type":"ContainerDied","Data":"45d9ec231aabf617af0f266cf86732671a42801e82fde3ab96d299fbcd4347dd"} Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.958619 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"baa0398e-38fe-456c-8456-53c083f8e121","Type":"ContainerStarted","Data":"a03894e9ac20d05abab769c09b75839aa9c9734261a0562f056257c6e863e766"} Feb 16 21:16:50 crc kubenswrapper[4805]: I0216 21:16:50.990044 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "efd834b8-895f-4f77-ac50-9e9b42ac9bd4" (UID: "efd834b8-895f-4f77-ac50-9e9b42ac9bd4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.007526 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd4hr\" (UniqueName: \"kubernetes.io/projected/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-kube-api-access-sd4hr\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.007557 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.038625 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "efd834b8-895f-4f77-ac50-9e9b42ac9bd4" (UID: "efd834b8-895f-4f77-ac50-9e9b42ac9bd4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.044514 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "efd834b8-895f-4f77-ac50-9e9b42ac9bd4" (UID: "efd834b8-895f-4f77-ac50-9e9b42ac9bd4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.054410 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "efd834b8-895f-4f77-ac50-9e9b42ac9bd4" (UID: "efd834b8-895f-4f77-ac50-9e9b42ac9bd4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.095741 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-config" (OuterVolumeSpecName: "config") pod "efd834b8-895f-4f77-ac50-9e9b42ac9bd4" (UID: "efd834b8-895f-4f77-ac50-9e9b42ac9bd4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.109480 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.109512 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.109521 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.109530 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efd834b8-895f-4f77-ac50-9e9b42ac9bd4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.304021 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z72sz"] Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.314147 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z72sz"] Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.666328 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efd834b8-895f-4f77-ac50-9e9b42ac9bd4" path="/var/lib/kubelet/pods/efd834b8-895f-4f77-ac50-9e9b42ac9bd4/volumes" Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.982112 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.992703 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" event={"ID":"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7","Type":"ContainerStarted","Data":"9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a"} Feb 16 21:16:51 crc kubenswrapper[4805]: I0216 21:16:51.992882 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:52 crc kubenswrapper[4805]: I0216 21:16:52.011761 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d48f79d95-n857j" event={"ID":"9503d6c3-cc2c-4a51-89c7-33339db1da77","Type":"ContainerStarted","Data":"42d22467b3b1a5f4a09c8eb034d4fa1ae23047a300b58e9fdb890b90abbcc288"} Feb 16 21:16:52 crc kubenswrapper[4805]: I0216 21:16:52.019908 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" podStartSLOduration=4.019893865 podStartE2EDuration="4.019893865s" podCreationTimestamp="2026-02-16 21:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:52.011994105 +0000 UTC m=+1229.830677400" watchObservedRunningTime="2026-02-16 21:16:52.019893865 +0000 UTC m=+1229.838577160" Feb 16 21:16:52 crc kubenswrapper[4805]: I0216 21:16:52.020685 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" event={"ID":"6fa81bfa-8c27-4546-9c30-1c52781a7ecb","Type":"ContainerStarted","Data":"7361f20d1ce867b4495c974afd410fd61ee66ab16acc1ef2ae271272eb2fb1dd"} Feb 16 21:16:53 crc 
kubenswrapper[4805]: I0216 21:16:53.044100 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" event={"ID":"6fa81bfa-8c27-4546-9c30-1c52781a7ecb","Type":"ContainerStarted","Data":"f82964cbcedc58a64cb3661739cb1f026030d3474d96015116844ead40523362"} Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.053438 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"baa0398e-38fe-456c-8456-53c083f8e121","Type":"ContainerStarted","Data":"b377e27b34bd0e6004ba931a6b5c3a9286cad9521a4c3fb7bf2a2522c386ae23"} Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.059062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d48f79d95-n857j" event={"ID":"9503d6c3-cc2c-4a51-89c7-33339db1da77","Type":"ContainerStarted","Data":"aa3bb305b34e13eec314176a7900d350db3eb08e371b24df0f9e0abb299c8d72"} Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.061522 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65b239aa-5174-4a48-a3d9-2df4d824e76b","Type":"ContainerStarted","Data":"78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83"} Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.075968 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7f5cbfc9c8-dwmdk" podStartSLOduration=3.721270574 podStartE2EDuration="7.075945487s" podCreationTimestamp="2026-02-16 21:16:46 +0000 UTC" firstStartedPulling="2026-02-16 21:16:47.946990115 +0000 UTC m=+1225.765673410" lastFinishedPulling="2026-02-16 21:16:51.301665038 +0000 UTC m=+1229.120348323" observedRunningTime="2026-02-16 21:16:53.065474764 +0000 UTC m=+1230.884158069" watchObservedRunningTime="2026-02-16 21:16:53.075945487 +0000 UTC m=+1230.894628792" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.093550 4805 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-worker-6d48f79d95-n857j" podStartSLOduration=3.543578961 podStartE2EDuration="7.093532868s" podCreationTimestamp="2026-02-16 21:16:46 +0000 UTC" firstStartedPulling="2026-02-16 21:16:47.750889829 +0000 UTC m=+1225.569573124" lastFinishedPulling="2026-02-16 21:16:51.300843736 +0000 UTC m=+1229.119527031" observedRunningTime="2026-02-16 21:16:53.088397784 +0000 UTC m=+1230.907081079" watchObservedRunningTime="2026-02-16 21:16:53.093532868 +0000 UTC m=+1230.912216163" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.666745 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5958664456-5mzsf"] Feb 16 21:16:53 crc kubenswrapper[4805]: E0216 21:16:53.677353 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd834b8-895f-4f77-ac50-9e9b42ac9bd4" containerName="init" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.677391 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd834b8-895f-4f77-ac50-9e9b42ac9bd4" containerName="init" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.678086 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd834b8-895f-4f77-ac50-9e9b42ac9bd4" containerName="init" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.680226 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5958664456-5mzsf"] Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.680339 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.685486 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.685921 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.703397 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mss9l\" (UniqueName: \"kubernetes.io/projected/e742c4b3-4b27-4dd3-bbf7-8a005f496802-kube-api-access-mss9l\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.703450 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-public-tls-certs\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.703695 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-config-data-custom\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.703842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-combined-ca-bundle\") pod \"barbican-api-5958664456-5mzsf\" (UID: 
\"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.703921 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-config-data\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.704046 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e742c4b3-4b27-4dd3-bbf7-8a005f496802-logs\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.704286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-internal-tls-certs\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.725297 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.807658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-config-data\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.807766 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/e742c4b3-4b27-4dd3-bbf7-8a005f496802-logs\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.807816 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-internal-tls-certs\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.807876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mss9l\" (UniqueName: \"kubernetes.io/projected/e742c4b3-4b27-4dd3-bbf7-8a005f496802-kube-api-access-mss9l\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.807901 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-public-tls-certs\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.807982 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-config-data-custom\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.808043 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-combined-ca-bundle\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.811980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e742c4b3-4b27-4dd3-bbf7-8a005f496802-logs\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.817312 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-config-data-custom\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.820190 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-combined-ca-bundle\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.823219 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-public-tls-certs\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.827336 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-internal-tls-certs\") pod 
\"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.830699 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e742c4b3-4b27-4dd3-bbf7-8a005f496802-config-data\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:53 crc kubenswrapper[4805]: I0216 21:16:53.837114 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mss9l\" (UniqueName: \"kubernetes.io/projected/e742c4b3-4b27-4dd3-bbf7-8a005f496802-kube-api-access-mss9l\") pod \"barbican-api-5958664456-5mzsf\" (UID: \"e742c4b3-4b27-4dd3-bbf7-8a005f496802\") " pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.019819 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.066819 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-65bbdd7745-qtmzr"] Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.067123 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-65bbdd7745-qtmzr" podUID="3d9e7980-c791-44b2-a527-948c3b3b14e3" containerName="neutron-api" containerID="cri-o://dc891749cb57102c67cc7d93a34901a0e19601c31b5532b5a55179aae3c41186" gracePeriod=30 Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.067292 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-65bbdd7745-qtmzr" podUID="3d9e7980-c791-44b2-a527-948c3b3b14e3" containerName="neutron-httpd" containerID="cri-o://f1067e7a3ce96adf6307131efa852971e92c89e36ce22c0b1f103b0aa0ef5941" gracePeriod=30 Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.086616 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65b239aa-5174-4a48-a3d9-2df4d824e76b","Type":"ContainerStarted","Data":"985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054"} Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.103948 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="baa0398e-38fe-456c-8456-53c083f8e121" containerName="cinder-api-log" containerID="cri-o://b377e27b34bd0e6004ba931a6b5c3a9286cad9521a4c3fb7bf2a2522c386ae23" gracePeriod=30 Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.104250 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"baa0398e-38fe-456c-8456-53c083f8e121","Type":"ContainerStarted","Data":"aa197270c0589e86022a95f6f1f990279686b5f941c4aef6c69ffde642ad2338"} Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.104794 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.104828 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="baa0398e-38fe-456c-8456-53c083f8e121" containerName="cinder-api" containerID="cri-o://aa197270c0589e86022a95f6f1f990279686b5f941c4aef6c69ffde642ad2338" gracePeriod=30 Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.127255 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.132062 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8fbb985b9-2x2rd"] Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.134105 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.174113 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8fbb985b9-2x2rd"] Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.175693 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.313612019 podStartE2EDuration="6.175676418s" podCreationTimestamp="2026-02-16 21:16:48 +0000 UTC" firstStartedPulling="2026-02-16 21:16:49.63157585 +0000 UTC m=+1227.450259145" lastFinishedPulling="2026-02-16 21:16:51.493640249 +0000 UTC m=+1229.312323544" observedRunningTime="2026-02-16 21:16:54.134342533 +0000 UTC m=+1231.953025828" watchObservedRunningTime="2026-02-16 21:16:54.175676418 +0000 UTC m=+1231.994359713" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.312089 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.312065716 podStartE2EDuration="6.312065716s" podCreationTimestamp="2026-02-16 21:16:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:54.2441571 +0000 UTC m=+1232.062840395" watchObservedRunningTime="2026-02-16 21:16:54.312065716 +0000 UTC m=+1232.130749011" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.325378 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-config\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.325489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-public-tls-certs\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.325507 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-internal-tls-certs\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.325530 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqzd6\" (UniqueName: \"kubernetes.io/projected/37233461-85b7-4069-885f-b5a1ac819473-kube-api-access-bqzd6\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.325548 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-combined-ca-bundle\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.325604 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-ovndb-tls-certs\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.325652 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-httpd-config\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.428660 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-ovndb-tls-certs\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.429106 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-httpd-config\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.429175 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-config\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.429268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-public-tls-certs\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.429295 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-internal-tls-certs\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.429316 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqzd6\" (UniqueName: \"kubernetes.io/projected/37233461-85b7-4069-885f-b5a1ac819473-kube-api-access-bqzd6\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.429333 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-combined-ca-bundle\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.436209 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-config\") pod 
\"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.442875 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-httpd-config\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.443709 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-internal-tls-certs\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.446782 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-combined-ca-bundle\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.462631 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-public-tls-certs\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.462868 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37233461-85b7-4069-885f-b5a1ac819473-ovndb-tls-certs\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc 
kubenswrapper[4805]: I0216 21:16:54.469459 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqzd6\" (UniqueName: \"kubernetes.io/projected/37233461-85b7-4069-885f-b5a1ac819473-kube-api-access-bqzd6\") pod \"neutron-8fbb985b9-2x2rd\" (UID: \"37233461-85b7-4069-885f-b5a1ac819473\") " pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.602899 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:54 crc kubenswrapper[4805]: W0216 21:16:54.666893 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode742c4b3_4b27_4dd3_bbf7_8a005f496802.slice/crio-b0365d2e32f4d4041a8fa0dce1e69378524a7f485df95cf4b633c689a0a94d74 WatchSource:0}: Error finding container b0365d2e32f4d4041a8fa0dce1e69378524a7f485df95cf4b633c689a0a94d74: Status 404 returned error can't find the container with id b0365d2e32f4d4041a8fa0dce1e69378524a7f485df95cf4b633c689a0a94d74 Feb 16 21:16:54 crc kubenswrapper[4805]: I0216 21:16:54.670455 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5958664456-5mzsf"] Feb 16 21:16:55 crc kubenswrapper[4805]: I0216 21:16:55.133132 4805 generic.go:334] "Generic (PLEG): container finished" podID="baa0398e-38fe-456c-8456-53c083f8e121" containerID="b377e27b34bd0e6004ba931a6b5c3a9286cad9521a4c3fb7bf2a2522c386ae23" exitCode=143 Feb 16 21:16:55 crc kubenswrapper[4805]: I0216 21:16:55.133458 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"baa0398e-38fe-456c-8456-53c083f8e121","Type":"ContainerDied","Data":"b377e27b34bd0e6004ba931a6b5c3a9286cad9521a4c3fb7bf2a2522c386ae23"} Feb 16 21:16:55 crc kubenswrapper[4805]: I0216 21:16:55.137745 4805 generic.go:334] "Generic (PLEG): container finished" podID="3d9e7980-c791-44b2-a527-948c3b3b14e3" 
containerID="f1067e7a3ce96adf6307131efa852971e92c89e36ce22c0b1f103b0aa0ef5941" exitCode=0 Feb 16 21:16:55 crc kubenswrapper[4805]: I0216 21:16:55.137808 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65bbdd7745-qtmzr" event={"ID":"3d9e7980-c791-44b2-a527-948c3b3b14e3","Type":"ContainerDied","Data":"f1067e7a3ce96adf6307131efa852971e92c89e36ce22c0b1f103b0aa0ef5941"} Feb 16 21:16:55 crc kubenswrapper[4805]: I0216 21:16:55.139578 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5958664456-5mzsf" event={"ID":"e742c4b3-4b27-4dd3-bbf7-8a005f496802","Type":"ContainerStarted","Data":"b0365d2e32f4d4041a8fa0dce1e69378524a7f485df95cf4b633c689a0a94d74"} Feb 16 21:16:55 crc kubenswrapper[4805]: I0216 21:16:55.390941 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8fbb985b9-2x2rd"] Feb 16 21:16:55 crc kubenswrapper[4805]: W0216 21:16:55.391388 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37233461_85b7_4069_885f_b5a1ac819473.slice/crio-6a0ae47e801b37c3c6e26275d27111b6a43ffe940329241bf291eb0802c2b3c1 WatchSource:0}: Error finding container 6a0ae47e801b37c3c6e26275d27111b6a43ffe940329241bf291eb0802c2b3c1: Status 404 returned error can't find the container with id 6a0ae47e801b37c3c6e26275d27111b6a43ffe940329241bf291eb0802c2b3c1 Feb 16 21:16:56 crc kubenswrapper[4805]: I0216 21:16:56.174941 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8fbb985b9-2x2rd" event={"ID":"37233461-85b7-4069-885f-b5a1ac819473","Type":"ContainerStarted","Data":"b77d1d9d314c08babe40fa1baf0362007cf9dbf897a8e18325c9703dfe2e750c"} Feb 16 21:16:56 crc kubenswrapper[4805]: I0216 21:16:56.175482 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:16:56 crc kubenswrapper[4805]: I0216 21:16:56.175498 4805 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-8fbb985b9-2x2rd" event={"ID":"37233461-85b7-4069-885f-b5a1ac819473","Type":"ContainerStarted","Data":"935e8400007d534a84bf21cc331d3b752a5d60bfec69e679502ea33f40875e8a"} Feb 16 21:16:56 crc kubenswrapper[4805]: I0216 21:16:56.175508 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8fbb985b9-2x2rd" event={"ID":"37233461-85b7-4069-885f-b5a1ac819473","Type":"ContainerStarted","Data":"6a0ae47e801b37c3c6e26275d27111b6a43ffe940329241bf291eb0802c2b3c1"} Feb 16 21:16:56 crc kubenswrapper[4805]: I0216 21:16:56.178455 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5958664456-5mzsf" event={"ID":"e742c4b3-4b27-4dd3-bbf7-8a005f496802","Type":"ContainerStarted","Data":"7427a6e6cabbae97761a5b6cde05764d556aa72205e71d5333cf79c37571c347"} Feb 16 21:16:56 crc kubenswrapper[4805]: I0216 21:16:56.178490 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5958664456-5mzsf" event={"ID":"e742c4b3-4b27-4dd3-bbf7-8a005f496802","Type":"ContainerStarted","Data":"46089e2dace9959179787096fb308a16d0d93e1f3d6670d36274f08da3c4a17b"} Feb 16 21:16:56 crc kubenswrapper[4805]: I0216 21:16:56.178681 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:56 crc kubenswrapper[4805]: I0216 21:16:56.192409 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8fbb985b9-2x2rd" podStartSLOduration=2.192392166 podStartE2EDuration="2.192392166s" podCreationTimestamp="2026-02-16 21:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:56.191000267 +0000 UTC m=+1234.009683562" watchObservedRunningTime="2026-02-16 21:16:56.192392166 +0000 UTC m=+1234.011075461" Feb 16 21:16:56 crc kubenswrapper[4805]: I0216 21:16:56.212361 4805 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-api-5958664456-5mzsf" podStartSLOduration=3.212344184 podStartE2EDuration="3.212344184s" podCreationTimestamp="2026-02-16 21:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:56.21042337 +0000 UTC m=+1234.029106665" watchObservedRunningTime="2026-02-16 21:16:56.212344184 +0000 UTC m=+1234.031027479" Feb 16 21:16:56 crc kubenswrapper[4805]: I0216 21:16:56.319753 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-65bbdd7745-qtmzr" podUID="3d9e7980-c791-44b2-a527-948c3b3b14e3" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.194:9696/\": dial tcp 10.217.0.194:9696: connect: connection refused" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.209976 4805 generic.go:334] "Generic (PLEG): container finished" podID="3d9e7980-c791-44b2-a527-948c3b3b14e3" containerID="dc891749cb57102c67cc7d93a34901a0e19601c31b5532b5a55179aae3c41186" exitCode=0 Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.211337 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65bbdd7745-qtmzr" event={"ID":"3d9e7980-c791-44b2-a527-948c3b3b14e3","Type":"ContainerDied","Data":"dc891749cb57102c67cc7d93a34901a0e19601c31b5532b5a55179aae3c41186"} Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.211398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65bbdd7745-qtmzr" event={"ID":"3d9e7980-c791-44b2-a527-948c3b3b14e3","Type":"ContainerDied","Data":"5ef474dcd3096582c92aa1e43f9a5491f93d41b1ff53d65454939859aaea1749"} Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.211411 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ef474dcd3096582c92aa1e43f9a5491f93d41b1ff53d65454939859aaea1749" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.211596 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5958664456-5mzsf" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.304653 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.401868 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-combined-ca-bundle\") pod \"3d9e7980-c791-44b2-a527-948c3b3b14e3\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.402029 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-public-tls-certs\") pod \"3d9e7980-c791-44b2-a527-948c3b3b14e3\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.402268 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-config\") pod \"3d9e7980-c791-44b2-a527-948c3b3b14e3\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.402970 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-ovndb-tls-certs\") pod \"3d9e7980-c791-44b2-a527-948c3b3b14e3\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.403035 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-httpd-config\") pod \"3d9e7980-c791-44b2-a527-948c3b3b14e3\" 
(UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.403106 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mjb2\" (UniqueName: \"kubernetes.io/projected/3d9e7980-c791-44b2-a527-948c3b3b14e3-kube-api-access-5mjb2\") pod \"3d9e7980-c791-44b2-a527-948c3b3b14e3\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.403572 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-internal-tls-certs\") pod \"3d9e7980-c791-44b2-a527-948c3b3b14e3\" (UID: \"3d9e7980-c791-44b2-a527-948c3b3b14e3\") " Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.420460 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3d9e7980-c791-44b2-a527-948c3b3b14e3" (UID: "3d9e7980-c791-44b2-a527-948c3b3b14e3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.425368 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9e7980-c791-44b2-a527-948c3b3b14e3-kube-api-access-5mjb2" (OuterVolumeSpecName: "kube-api-access-5mjb2") pod "3d9e7980-c791-44b2-a527-948c3b3b14e3" (UID: "3d9e7980-c791-44b2-a527-948c3b3b14e3"). InnerVolumeSpecName "kube-api-access-5mjb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.497682 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-config" (OuterVolumeSpecName: "config") pod "3d9e7980-c791-44b2-a527-948c3b3b14e3" (UID: "3d9e7980-c791-44b2-a527-948c3b3b14e3"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.509477 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.509511 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.509523 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mjb2\" (UniqueName: \"kubernetes.io/projected/3d9e7980-c791-44b2-a527-948c3b3b14e3-kube-api-access-5mjb2\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.535813 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d9e7980-c791-44b2-a527-948c3b3b14e3" (UID: "3d9e7980-c791-44b2-a527-948c3b3b14e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.535880 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3d9e7980-c791-44b2-a527-948c3b3b14e3" (UID: "3d9e7980-c791-44b2-a527-948c3b3b14e3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.558478 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3d9e7980-c791-44b2-a527-948c3b3b14e3" (UID: "3d9e7980-c791-44b2-a527-948c3b3b14e3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.572877 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3d9e7980-c791-44b2-a527-948c3b3b14e3" (UID: "3d9e7980-c791-44b2-a527-948c3b3b14e3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.611083 4805 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.611123 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.611134 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:16:57 crc kubenswrapper[4805]: I0216 21:16:57.611144 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d9e7980-c791-44b2-a527-948c3b3b14e3-public-tls-certs\") on node \"crc\" 
DevicePath \"\"" Feb 16 21:16:58 crc kubenswrapper[4805]: I0216 21:16:58.222017 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65bbdd7745-qtmzr" Feb 16 21:16:58 crc kubenswrapper[4805]: I0216 21:16:58.275278 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-65bbdd7745-qtmzr"] Feb 16 21:16:58 crc kubenswrapper[4805]: I0216 21:16:58.287814 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-65bbdd7745-qtmzr"] Feb 16 21:16:58 crc kubenswrapper[4805]: I0216 21:16:58.537035 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:58 crc kubenswrapper[4805]: I0216 21:16:58.701551 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b75669fb6-kkmnk" Feb 16 21:16:58 crc kubenswrapper[4805]: I0216 21:16:58.860608 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 21:16:58 crc kubenswrapper[4805]: I0216 21:16:58.974902 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.090435 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rmrmq"] Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.090683 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" podUID="6b70cf49-b5fd-4814-87ef-e22b1b820066" containerName="dnsmasq-dns" containerID="cri-o://1c4bd2bfff27d7f68c16a3274f8a0e3e1257d7d1ac0bc4740feb8c84a0f739df" gracePeriod=10 Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.197229 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 
21:16:59.265901 4805 generic.go:334] "Generic (PLEG): container finished" podID="6b70cf49-b5fd-4814-87ef-e22b1b820066" containerID="1c4bd2bfff27d7f68c16a3274f8a0e3e1257d7d1ac0bc4740feb8c84a0f739df" exitCode=0 Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.265951 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" event={"ID":"6b70cf49-b5fd-4814-87ef-e22b1b820066","Type":"ContainerDied","Data":"1c4bd2bfff27d7f68c16a3274f8a0e3e1257d7d1ac0bc4740feb8c84a0f739df"} Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.311450 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.620882 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9e7980-c791-44b2-a527-948c3b3b14e3" path="/var/lib/kubelet/pods/3d9e7980-c791-44b2-a527-948c3b3b14e3/volumes" Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.743698 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.907217 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-config\") pod \"6b70cf49-b5fd-4814-87ef-e22b1b820066\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.907298 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-sb\") pod \"6b70cf49-b5fd-4814-87ef-e22b1b820066\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.907366 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-nb\") pod \"6b70cf49-b5fd-4814-87ef-e22b1b820066\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.907473 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-swift-storage-0\") pod \"6b70cf49-b5fd-4814-87ef-e22b1b820066\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.907614 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nsxg\" (UniqueName: \"kubernetes.io/projected/6b70cf49-b5fd-4814-87ef-e22b1b820066-kube-api-access-9nsxg\") pod \"6b70cf49-b5fd-4814-87ef-e22b1b820066\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.907667 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-svc\") pod \"6b70cf49-b5fd-4814-87ef-e22b1b820066\" (UID: \"6b70cf49-b5fd-4814-87ef-e22b1b820066\") " Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.914386 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b70cf49-b5fd-4814-87ef-e22b1b820066-kube-api-access-9nsxg" (OuterVolumeSpecName: "kube-api-access-9nsxg") pod "6b70cf49-b5fd-4814-87ef-e22b1b820066" (UID: "6b70cf49-b5fd-4814-87ef-e22b1b820066"). InnerVolumeSpecName "kube-api-access-9nsxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:59 crc kubenswrapper[4805]: I0216 21:16:59.989824 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-config" (OuterVolumeSpecName: "config") pod "6b70cf49-b5fd-4814-87ef-e22b1b820066" (UID: "6b70cf49-b5fd-4814-87ef-e22b1b820066"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.010850 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nsxg\" (UniqueName: \"kubernetes.io/projected/6b70cf49-b5fd-4814-87ef-e22b1b820066-kube-api-access-9nsxg\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.010879 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.012281 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6b70cf49-b5fd-4814-87ef-e22b1b820066" (UID: "6b70cf49-b5fd-4814-87ef-e22b1b820066"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.016422 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6b70cf49-b5fd-4814-87ef-e22b1b820066" (UID: "6b70cf49-b5fd-4814-87ef-e22b1b820066"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.022943 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6b70cf49-b5fd-4814-87ef-e22b1b820066" (UID: "6b70cf49-b5fd-4814-87ef-e22b1b820066"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.031901 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b70cf49-b5fd-4814-87ef-e22b1b820066" (UID: "6b70cf49-b5fd-4814-87ef-e22b1b820066"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.113424 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.113453 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.113463 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.113473 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b70cf49-b5fd-4814-87ef-e22b1b820066-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.279237 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="65b239aa-5174-4a48-a3d9-2df4d824e76b" containerName="cinder-scheduler" containerID="cri-o://78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83" gracePeriod=30 Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.279504 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.279526 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-rmrmq" event={"ID":"6b70cf49-b5fd-4814-87ef-e22b1b820066","Type":"ContainerDied","Data":"6ad5bc3bf575b5b2e331577fe197dda1ae6ecb3e20d65e5fe9da49137118b05d"} Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.282848 4805 scope.go:117] "RemoveContainer" containerID="1c4bd2bfff27d7f68c16a3274f8a0e3e1257d7d1ac0bc4740feb8c84a0f739df" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.279940 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="65b239aa-5174-4a48-a3d9-2df4d824e76b" containerName="probe" containerID="cri-o://985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054" gracePeriod=30 Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.316781 4805 scope.go:117] "RemoveContainer" containerID="88564d5a9da1fe901b4cd43a529064a813fe7dd25cd0f577a5c3eb4fd09887f2" Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.348586 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rmrmq"] Feb 16 21:17:00 crc kubenswrapper[4805]: I0216 21:17:00.357713 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-rmrmq"] Feb 16 21:17:01 crc kubenswrapper[4805]: I0216 21:17:01.296701 4805 generic.go:334] "Generic (PLEG): container finished" podID="65b239aa-5174-4a48-a3d9-2df4d824e76b" containerID="985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054" exitCode=0 Feb 16 21:17:01 crc kubenswrapper[4805]: I0216 21:17:01.297055 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65b239aa-5174-4a48-a3d9-2df4d824e76b","Type":"ContainerDied","Data":"985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054"} Feb 16 21:17:01 crc 
kubenswrapper[4805]: I0216 21:17:01.613075 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b70cf49-b5fd-4814-87ef-e22b1b820066" path="/var/lib/kubelet/pods/6b70cf49-b5fd-4814-87ef-e22b1b820066/volumes" Feb 16 21:17:02 crc kubenswrapper[4805]: I0216 21:17:02.441231 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.001685 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.021915 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-scripts\") pod \"65b239aa-5174-4a48-a3d9-2df4d824e76b\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.022003 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data-custom\") pod \"65b239aa-5174-4a48-a3d9-2df4d824e76b\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.022121 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data\") pod \"65b239aa-5174-4a48-a3d9-2df4d824e76b\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.022160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65b239aa-5174-4a48-a3d9-2df4d824e76b-etc-machine-id\") pod \"65b239aa-5174-4a48-a3d9-2df4d824e76b\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " Feb 16 21:17:03 crc 
kubenswrapper[4805]: I0216 21:17:03.022189 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twk6q\" (UniqueName: \"kubernetes.io/projected/65b239aa-5174-4a48-a3d9-2df4d824e76b-kube-api-access-twk6q\") pod \"65b239aa-5174-4a48-a3d9-2df4d824e76b\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.022220 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-combined-ca-bundle\") pod \"65b239aa-5174-4a48-a3d9-2df4d824e76b\" (UID: \"65b239aa-5174-4a48-a3d9-2df4d824e76b\") " Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.022948 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65b239aa-5174-4a48-a3d9-2df4d824e76b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "65b239aa-5174-4a48-a3d9-2df4d824e76b" (UID: "65b239aa-5174-4a48-a3d9-2df4d824e76b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.030015 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b239aa-5174-4a48-a3d9-2df4d824e76b-kube-api-access-twk6q" (OuterVolumeSpecName: "kube-api-access-twk6q") pod "65b239aa-5174-4a48-a3d9-2df4d824e76b" (UID: "65b239aa-5174-4a48-a3d9-2df4d824e76b"). InnerVolumeSpecName "kube-api-access-twk6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.030115 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "65b239aa-5174-4a48-a3d9-2df4d824e76b" (UID: "65b239aa-5174-4a48-a3d9-2df4d824e76b"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.051291 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-scripts" (OuterVolumeSpecName: "scripts") pod "65b239aa-5174-4a48-a3d9-2df4d824e76b" (UID: "65b239aa-5174-4a48-a3d9-2df4d824e76b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.125172 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65b239aa-5174-4a48-a3d9-2df4d824e76b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.125219 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twk6q\" (UniqueName: \"kubernetes.io/projected/65b239aa-5174-4a48-a3d9-2df4d824e76b-kube-api-access-twk6q\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.125234 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.125246 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.196502 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65b239aa-5174-4a48-a3d9-2df4d824e76b" (UID: "65b239aa-5174-4a48-a3d9-2df4d824e76b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.227265 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.257650 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data" (OuterVolumeSpecName: "config-data") pod "65b239aa-5174-4a48-a3d9-2df4d824e76b" (UID: "65b239aa-5174-4a48-a3d9-2df4d824e76b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.332776 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b239aa-5174-4a48-a3d9-2df4d824e76b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.355043 4805 generic.go:334] "Generic (PLEG): container finished" podID="65b239aa-5174-4a48-a3d9-2df4d824e76b" containerID="78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83" exitCode=0 Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.355095 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65b239aa-5174-4a48-a3d9-2df4d824e76b","Type":"ContainerDied","Data":"78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83"} Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.355121 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65b239aa-5174-4a48-a3d9-2df4d824e76b","Type":"ContainerDied","Data":"60e27743b363fe5f76669869008ce32e737fc40dcec5b54edd81f96e415788fe"} Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.355136 4805 scope.go:117] "RemoveContainer" 
containerID="985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.355294 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.401067 4805 scope.go:117] "RemoveContainer" containerID="78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.421589 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.432829 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.454800 4805 scope.go:117] "RemoveContainer" containerID="985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054" Feb 16 21:17:03 crc kubenswrapper[4805]: E0216 21:17:03.455158 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054\": container with ID starting with 985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054 not found: ID does not exist" containerID="985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.455201 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054"} err="failed to get container status \"985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054\": rpc error: code = NotFound desc = could not find container \"985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054\": container with ID starting with 985f7494bc2614d77921e0f9c79d92cb7c2eba5eba7bdb60f870d9618cf07054 not found: ID does not exist" Feb 16 21:17:03 
crc kubenswrapper[4805]: I0216 21:17:03.455228 4805 scope.go:117] "RemoveContainer" containerID="78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83" Feb 16 21:17:03 crc kubenswrapper[4805]: E0216 21:17:03.455814 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83\": container with ID starting with 78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83 not found: ID does not exist" containerID="78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.455850 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83"} err="failed to get container status \"78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83\": rpc error: code = NotFound desc = could not find container \"78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83\": container with ID starting with 78a618ec5b2fff7389e532893ef9b9a802dc1b1d50d4a749aa6b1220a379fa83 not found: ID does not exist" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.459763 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:17:03 crc kubenswrapper[4805]: E0216 21:17:03.460219 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b239aa-5174-4a48-a3d9-2df4d824e76b" containerName="probe" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.460237 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b239aa-5174-4a48-a3d9-2df4d824e76b" containerName="probe" Feb 16 21:17:03 crc kubenswrapper[4805]: E0216 21:17:03.460261 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9e7980-c791-44b2-a527-948c3b3b14e3" containerName="neutron-httpd" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 
21:17:03.460268 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9e7980-c791-44b2-a527-948c3b3b14e3" containerName="neutron-httpd" Feb 16 21:17:03 crc kubenswrapper[4805]: E0216 21:17:03.460281 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9e7980-c791-44b2-a527-948c3b3b14e3" containerName="neutron-api" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.460287 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9e7980-c791-44b2-a527-948c3b3b14e3" containerName="neutron-api" Feb 16 21:17:03 crc kubenswrapper[4805]: E0216 21:17:03.460316 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b70cf49-b5fd-4814-87ef-e22b1b820066" containerName="dnsmasq-dns" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.460322 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b70cf49-b5fd-4814-87ef-e22b1b820066" containerName="dnsmasq-dns" Feb 16 21:17:03 crc kubenswrapper[4805]: E0216 21:17:03.460344 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b70cf49-b5fd-4814-87ef-e22b1b820066" containerName="init" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.460351 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b70cf49-b5fd-4814-87ef-e22b1b820066" containerName="init" Feb 16 21:17:03 crc kubenswrapper[4805]: E0216 21:17:03.460362 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b239aa-5174-4a48-a3d9-2df4d824e76b" containerName="cinder-scheduler" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.460369 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b239aa-5174-4a48-a3d9-2df4d824e76b" containerName="cinder-scheduler" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.460572 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b239aa-5174-4a48-a3d9-2df4d824e76b" containerName="probe" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.460594 4805 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="3d9e7980-c791-44b2-a527-948c3b3b14e3" containerName="neutron-httpd" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.460607 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9e7980-c791-44b2-a527-948c3b3b14e3" containerName="neutron-api" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.460615 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b70cf49-b5fd-4814-87ef-e22b1b820066" containerName="dnsmasq-dns" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.460623 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b239aa-5174-4a48-a3d9-2df4d824e76b" containerName="cinder-scheduler" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.466145 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.470524 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.474836 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.521569 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.524633 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.621116 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b239aa-5174-4a48-a3d9-2df4d824e76b" path="/var/lib/kubelet/pods/65b239aa-5174-4a48-a3d9-2df4d824e76b/volumes" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.640675 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.640741 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23560f03-f6f6-48a5-9d10-b797e3d8042e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.641007 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5svw8\" (UniqueName: \"kubernetes.io/projected/23560f03-f6f6-48a5-9d10-b797e3d8042e-kube-api-access-5svw8\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.641116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-config-data\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.641178 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.641229 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-scripts\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.744199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5svw8\" (UniqueName: \"kubernetes.io/projected/23560f03-f6f6-48a5-9d10-b797e3d8042e-kube-api-access-5svw8\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.744293 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-config-data\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.744351 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.744405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-scripts\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0" Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.744489 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " 
pod="openstack/cinder-scheduler-0"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.744516 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23560f03-f6f6-48a5-9d10-b797e3d8042e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.744567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23560f03-f6f6-48a5-9d10-b797e3d8042e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.751226 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-config-data\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.752077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-scripts\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.770281 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.770786 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23560f03-f6f6-48a5-9d10-b797e3d8042e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.775519 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5svw8\" (UniqueName: \"kubernetes.io/projected/23560f03-f6f6-48a5-9d10-b797e3d8042e-kube-api-access-5svw8\") pod \"cinder-scheduler-0\" (UID: \"23560f03-f6f6-48a5-9d10-b797e3d8042e\") " pod="openstack/cinder-scheduler-0"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.786207 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.887820 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7f8cfbb668-2nz5c"]
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.903885 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.912461 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f8cfbb668-2nz5c"]
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.969968 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1044e6f4-8331-45b2-b130-aee982e7c595-logs\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.970182 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-public-tls-certs\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.970204 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-internal-tls-certs\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.970374 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7l8m\" (UniqueName: \"kubernetes.io/projected/1044e6f4-8331-45b2-b130-aee982e7c595-kube-api-access-k7l8m\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.970446 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-scripts\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.970568 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-config-data\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:03 crc kubenswrapper[4805]: I0216 21:17:03.970629 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-combined-ca-bundle\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.072339 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-public-tls-certs\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.072387 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-internal-tls-certs\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.072411 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7l8m\" (UniqueName: \"kubernetes.io/projected/1044e6f4-8331-45b2-b130-aee982e7c595-kube-api-access-k7l8m\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.072476 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-scripts\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.072710 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-config-data\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.072757 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-combined-ca-bundle\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.073395 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1044e6f4-8331-45b2-b130-aee982e7c595-logs\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.073936 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1044e6f4-8331-45b2-b130-aee982e7c595-logs\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.079747 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-public-tls-certs\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.080714 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-config-data\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.093476 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-scripts\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.093839 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-internal-tls-certs\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.094190 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1044e6f4-8331-45b2-b130-aee982e7c595-combined-ca-bundle\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.097017 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7l8m\" (UniqueName: \"kubernetes.io/projected/1044e6f4-8331-45b2-b130-aee982e7c595-kube-api-access-k7l8m\") pod \"placement-7f8cfbb668-2nz5c\" (UID: \"1044e6f4-8331-45b2-b130-aee982e7c595\") " pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.290378 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.579354 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 21:17:04 crc kubenswrapper[4805]: W0216 21:17:04.639892 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23560f03_f6f6_48a5_9d10_b797e3d8042e.slice/crio-fd17f5517cd000d284855963eee743500df317541f7187c730a3397b02868eff WatchSource:0}: Error finding container fd17f5517cd000d284855963eee743500df317541f7187c730a3397b02868eff: Status 404 returned error can't find the container with id fd17f5517cd000d284855963eee743500df317541f7187c730a3397b02868eff
Feb 16 21:17:04 crc kubenswrapper[4805]: I0216 21:17:04.929047 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f8cfbb668-2nz5c"]
Feb 16 21:17:04 crc kubenswrapper[4805]: W0216 21:17:04.940025 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1044e6f4_8331_45b2_b130_aee982e7c595.slice/crio-6e3e954677838324a209772f1fed1573d2d783fd287a76c06ab8c51078949480 WatchSource:0}: Error finding container 6e3e954677838324a209772f1fed1573d2d783fd287a76c06ab8c51078949480: Status 404 returned error can't find the container with id 6e3e954677838324a209772f1fed1573d2d783fd287a76c06ab8c51078949480
Feb 16 21:17:05 crc kubenswrapper[4805]: I0216 21:17:05.421292 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"23560f03-f6f6-48a5-9d10-b797e3d8042e","Type":"ContainerStarted","Data":"a060e7cafbc7c48fed20de154219f983ef57baa989c88ffab5a601dbe2ef2768"}
Feb 16 21:17:05 crc kubenswrapper[4805]: I0216 21:17:05.421798 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"23560f03-f6f6-48a5-9d10-b797e3d8042e","Type":"ContainerStarted","Data":"fd17f5517cd000d284855963eee743500df317541f7187c730a3397b02868eff"}
Feb 16 21:17:05 crc kubenswrapper[4805]: I0216 21:17:05.424019 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f8cfbb668-2nz5c" event={"ID":"1044e6f4-8331-45b2-b130-aee982e7c595","Type":"ContainerStarted","Data":"623c365257e3d9f12568706bd072820ca14a6bc8fe9abdd59f8be51485f29e11"}
Feb 16 21:17:05 crc kubenswrapper[4805]: I0216 21:17:05.424046 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f8cfbb668-2nz5c" event={"ID":"1044e6f4-8331-45b2-b130-aee982e7c595","Type":"ContainerStarted","Data":"9c649512c519bc5cdacd16c3cfde6fab18118dc90e23693912e36bc501db5b34"}
Feb 16 21:17:05 crc kubenswrapper[4805]: I0216 21:17:05.424058 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f8cfbb668-2nz5c" event={"ID":"1044e6f4-8331-45b2-b130-aee982e7c595","Type":"ContainerStarted","Data":"6e3e954677838324a209772f1fed1573d2d783fd287a76c06ab8c51078949480"}
Feb 16 21:17:05 crc kubenswrapper[4805]: I0216 21:17:05.424205 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:05 crc kubenswrapper[4805]: I0216 21:17:05.447641 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7f8cfbb668-2nz5c" podStartSLOduration=2.447612678 podStartE2EDuration="2.447612678s" podCreationTimestamp="2026-02-16 21:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:05.443194774 +0000 UTC m=+1243.261878089" watchObservedRunningTime="2026-02-16 21:17:05.447612678 +0000 UTC m=+1243.266295973"
Feb 16 21:17:05 crc kubenswrapper[4805]: I0216 21:17:05.978377 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5958664456-5mzsf"
Feb 16 21:17:06 crc kubenswrapper[4805]: I0216 21:17:06.017486 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5958664456-5mzsf"
Feb 16 21:17:06 crc kubenswrapper[4805]: I0216 21:17:06.088478 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b75669fb6-kkmnk"]
Feb 16 21:17:06 crc kubenswrapper[4805]: I0216 21:17:06.088704 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b75669fb6-kkmnk" podUID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerName="barbican-api-log" containerID="cri-o://a50854e9179ad1f7e3a7a201999f56f82029dd5d0a4a66f258a481e618572a44" gracePeriod=30
Feb 16 21:17:06 crc kubenswrapper[4805]: I0216 21:17:06.089127 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b75669fb6-kkmnk" podUID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerName="barbican-api" containerID="cri-o://1b9489d5ee02cba3db8dce72c2f23a796fb4f8a1fe44228d8f719142bf8df113" gracePeriod=30
Feb 16 21:17:06 crc kubenswrapper[4805]: I0216 21:17:06.435306 4805 generic.go:334] "Generic (PLEG): container finished" podID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerID="a50854e9179ad1f7e3a7a201999f56f82029dd5d0a4a66f258a481e618572a44" exitCode=143
Feb 16 21:17:06 crc kubenswrapper[4805]: I0216 21:17:06.435380 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b75669fb6-kkmnk" event={"ID":"f401223c-dc9e-4cf2-b86e-9888a86f2a03","Type":"ContainerDied","Data":"a50854e9179ad1f7e3a7a201999f56f82029dd5d0a4a66f258a481e618572a44"}
Feb 16 21:17:06 crc kubenswrapper[4805]: I0216 21:17:06.437817 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"23560f03-f6f6-48a5-9d10-b797e3d8042e","Type":"ContainerStarted","Data":"4a56dd138240935e4eed350372ce91c3442e46f798fa75af512ab73481c8028b"}
Feb 16 21:17:06 crc kubenswrapper[4805]: I0216 21:17:06.438038 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f8cfbb668-2nz5c"
Feb 16 21:17:06 crc kubenswrapper[4805]: I0216 21:17:06.489922 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.489902405 podStartE2EDuration="3.489902405s" podCreationTimestamp="2026-02-16 21:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:06.454204478 +0000 UTC m=+1244.272887783" watchObservedRunningTime="2026-02-16 21:17:06.489902405 +0000 UTC m=+1244.308585700"
Feb 16 21:17:08 crc kubenswrapper[4805]: I0216 21:17:08.099450 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 21:17:08 crc kubenswrapper[4805]: I0216 21:17:08.099919 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 21:17:08 crc kubenswrapper[4805]: I0216 21:17:08.786564 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.274611 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b75669fb6-kkmnk" podUID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": read tcp 10.217.0.2:44024->10.217.0.200:9311: read: connection reset by peer"
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.275257 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b75669fb6-kkmnk" podUID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": read tcp 10.217.0.2:44028->10.217.0.200:9311: read: connection reset by peer"
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.440938 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-747f5c598c-x2pl7"
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.485080 4805 generic.go:334] "Generic (PLEG): container finished" podID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerID="1b9489d5ee02cba3db8dce72c2f23a796fb4f8a1fe44228d8f719142bf8df113" exitCode=0
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.485118 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b75669fb6-kkmnk" event={"ID":"f401223c-dc9e-4cf2-b86e-9888a86f2a03","Type":"ContainerDied","Data":"1b9489d5ee02cba3db8dce72c2f23a796fb4f8a1fe44228d8f719142bf8df113"}
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.830253 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b75669fb6-kkmnk"
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.855329 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data\") pod \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") "
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.855479 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f401223c-dc9e-4cf2-b86e-9888a86f2a03-logs\") pod \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") "
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.855520 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4pmw\" (UniqueName: \"kubernetes.io/projected/f401223c-dc9e-4cf2-b86e-9888a86f2a03-kube-api-access-z4pmw\") pod \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") "
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.855543 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-combined-ca-bundle\") pod \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") "
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.855600 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data-custom\") pod \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\" (UID: \"f401223c-dc9e-4cf2-b86e-9888a86f2a03\") "
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.857159 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f401223c-dc9e-4cf2-b86e-9888a86f2a03-logs" (OuterVolumeSpecName: "logs") pod "f401223c-dc9e-4cf2-b86e-9888a86f2a03" (UID: "f401223c-dc9e-4cf2-b86e-9888a86f2a03"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.868548 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f401223c-dc9e-4cf2-b86e-9888a86f2a03" (UID: "f401223c-dc9e-4cf2-b86e-9888a86f2a03"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.885001 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f401223c-dc9e-4cf2-b86e-9888a86f2a03-kube-api-access-z4pmw" (OuterVolumeSpecName: "kube-api-access-z4pmw") pod "f401223c-dc9e-4cf2-b86e-9888a86f2a03" (UID: "f401223c-dc9e-4cf2-b86e-9888a86f2a03"). InnerVolumeSpecName "kube-api-access-z4pmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.921131 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f401223c-dc9e-4cf2-b86e-9888a86f2a03" (UID: "f401223c-dc9e-4cf2-b86e-9888a86f2a03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.936810 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data" (OuterVolumeSpecName: "config-data") pod "f401223c-dc9e-4cf2-b86e-9888a86f2a03" (UID: "f401223c-dc9e-4cf2-b86e-9888a86f2a03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.958501 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.958542 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.958554 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f401223c-dc9e-4cf2-b86e-9888a86f2a03-logs\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.958566 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4pmw\" (UniqueName: \"kubernetes.io/projected/f401223c-dc9e-4cf2-b86e-9888a86f2a03-kube-api-access-z4pmw\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:09 crc kubenswrapper[4805]: I0216 21:17:09.958578 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f401223c-dc9e-4cf2-b86e-9888a86f2a03-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:10 crc kubenswrapper[4805]: I0216 21:17:10.495197 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b75669fb6-kkmnk" event={"ID":"f401223c-dc9e-4cf2-b86e-9888a86f2a03","Type":"ContainerDied","Data":"8ff97d9900267491cdda3219cddc752753a02e59685d099f2e9196f942531419"}
Feb 16 21:17:10 crc kubenswrapper[4805]: I0216 21:17:10.495441 4805 scope.go:117] "RemoveContainer" containerID="1b9489d5ee02cba3db8dce72c2f23a796fb4f8a1fe44228d8f719142bf8df113"
Feb 16 21:17:10 crc kubenswrapper[4805]: I0216 21:17:10.495276 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b75669fb6-kkmnk"
Feb 16 21:17:10 crc kubenswrapper[4805]: I0216 21:17:10.518394 4805 scope.go:117] "RemoveContainer" containerID="a50854e9179ad1f7e3a7a201999f56f82029dd5d0a4a66f258a481e618572a44"
Feb 16 21:17:10 crc kubenswrapper[4805]: I0216 21:17:10.541925 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b75669fb6-kkmnk"]
Feb 16 21:17:10 crc kubenswrapper[4805]: I0216 21:17:10.559521 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7b75669fb6-kkmnk"]
Feb 16 21:17:11 crc kubenswrapper[4805]: I0216 21:17:11.618067 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" path="/var/lib/kubelet/pods/f401223c-dc9e-4cf2-b86e-9888a86f2a03/volumes"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.156215 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 16 21:17:13 crc kubenswrapper[4805]: E0216 21:17:13.157194 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerName="barbican-api-log"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.157212 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerName="barbican-api-log"
Feb 16 21:17:13 crc kubenswrapper[4805]: E0216 21:17:13.157232 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerName="barbican-api"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.157240 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerName="barbican-api"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.157485 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerName="barbican-api-log"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.157530 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f401223c-dc9e-4cf2-b86e-9888a86f2a03" containerName="barbican-api"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.158516 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.160473 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.160669 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vtg2w"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.164040 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.183254 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.333533 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-openstack-config\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.333606 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.333738 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.333818 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt4m9\" (UniqueName: \"kubernetes.io/projected/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-kube-api-access-vt4m9\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.435800 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.435979 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt4m9\" (UniqueName: \"kubernetes.io/projected/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-kube-api-access-vt4m9\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.436221 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-openstack-config\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.436304 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.437703 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-openstack-config\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.445224 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.448155 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.460265 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt4m9\" (UniqueName: \"kubernetes.io/projected/1d4f5d67-11fe-406b-ac3d-48fb09f5a513-kube-api-access-vt4m9\") pod \"openstackclient\" (UID: \"1d4f5d67-11fe-406b-ac3d-48fb09f5a513\") " pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.491793 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 21:17:13 crc kubenswrapper[4805]: I0216 21:17:13.991899 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 16 21:17:14 crc kubenswrapper[4805]: I0216 21:17:14.073712 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 16 21:17:14 crc kubenswrapper[4805]: I0216 21:17:14.578773 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1d4f5d67-11fe-406b-ac3d-48fb09f5a513","Type":"ContainerStarted","Data":"cc077ae892415e703c68ae772ea52b12620fa86e1c87552c31851b4c1a97b976"}
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.668005 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7688d557bc-2jgzd"]
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.670287 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7688d557bc-2jgzd"
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.672312 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.672312 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.673436 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.684150 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7688d557bc-2jgzd"]
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.818880 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-etc-swift\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd"
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.819015 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-config-data\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd"
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.819048 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-run-httpd\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd"
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.819184 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4gh\" (UniqueName: \"kubernetes.io/projected/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-kube-api-access-bj4gh\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd"
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.819275 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-combined-ca-bundle\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd"
Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.819426 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName:
\"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-public-tls-certs\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.819525 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-internal-tls-certs\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.819559 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-log-httpd\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.921745 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-internal-tls-certs\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.921792 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-log-httpd\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.921878 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-etc-swift\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.921918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-run-httpd\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.921957 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-config-data\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.922024 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj4gh\" (UniqueName: \"kubernetes.io/projected/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-kube-api-access-bj4gh\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.922073 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-combined-ca-bundle\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.922131 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-public-tls-certs\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.922488 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-log-httpd\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.923572 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-run-httpd\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.928901 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-public-tls-certs\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.930886 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-etc-swift\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.931546 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-config-data\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: 
\"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.931710 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-internal-tls-certs\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.932127 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-combined-ca-bundle\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.947393 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj4gh\" (UniqueName: \"kubernetes.io/projected/95ea5d76-aedb-4a0a-a03d-fdc9140265e4-kube-api-access-bj4gh\") pod \"swift-proxy-7688d557bc-2jgzd\" (UID: \"95ea5d76-aedb-4a0a-a03d-fdc9140265e4\") " pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:16 crc kubenswrapper[4805]: I0216 21:17:16.990964 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:17 crc kubenswrapper[4805]: I0216 21:17:17.696391 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7688d557bc-2jgzd"] Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.248156 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.273616 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-combined-ca-bundle\") pod \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.273712 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-run-httpd\") pod \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.273826 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-log-httpd\") pod \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.273881 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-scripts\") pod \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.273989 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-sg-core-conf-yaml\") pod \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.274031 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r79f8\" (UniqueName: 
\"kubernetes.io/projected/104ec6b3-3a02-486e-8948-0aeb16bbddd8-kube-api-access-r79f8\") pod \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.274134 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-config-data\") pod \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\" (UID: \"104ec6b3-3a02-486e-8948-0aeb16bbddd8\") " Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.274671 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "104ec6b3-3a02-486e-8948-0aeb16bbddd8" (UID: "104ec6b3-3a02-486e-8948-0aeb16bbddd8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.274843 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.274961 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "104ec6b3-3a02-486e-8948-0aeb16bbddd8" (UID: "104ec6b3-3a02-486e-8948-0aeb16bbddd8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.279617 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-scripts" (OuterVolumeSpecName: "scripts") pod "104ec6b3-3a02-486e-8948-0aeb16bbddd8" (UID: "104ec6b3-3a02-486e-8948-0aeb16bbddd8"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.285956 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104ec6b3-3a02-486e-8948-0aeb16bbddd8-kube-api-access-r79f8" (OuterVolumeSpecName: "kube-api-access-r79f8") pod "104ec6b3-3a02-486e-8948-0aeb16bbddd8" (UID: "104ec6b3-3a02-486e-8948-0aeb16bbddd8"). InnerVolumeSpecName "kube-api-access-r79f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.374404 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "104ec6b3-3a02-486e-8948-0aeb16bbddd8" (UID: "104ec6b3-3a02-486e-8948-0aeb16bbddd8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.376741 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/104ec6b3-3a02-486e-8948-0aeb16bbddd8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.376782 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.376796 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.376824 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r79f8\" (UniqueName: 
\"kubernetes.io/projected/104ec6b3-3a02-486e-8948-0aeb16bbddd8-kube-api-access-r79f8\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.393753 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "104ec6b3-3a02-486e-8948-0aeb16bbddd8" (UID: "104ec6b3-3a02-486e-8948-0aeb16bbddd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.460263 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-config-data" (OuterVolumeSpecName: "config-data") pod "104ec6b3-3a02-486e-8948-0aeb16bbddd8" (UID: "104ec6b3-3a02-486e-8948-0aeb16bbddd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.480280 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.480321 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/104ec6b3-3a02-486e-8948-0aeb16bbddd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.632130 4805 generic.go:334] "Generic (PLEG): container finished" podID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerID="3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561" exitCode=137 Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.632222 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"104ec6b3-3a02-486e-8948-0aeb16bbddd8","Type":"ContainerDied","Data":"3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561"} Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.632250 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"104ec6b3-3a02-486e-8948-0aeb16bbddd8","Type":"ContainerDied","Data":"16f230cc4b46c4ed7af82302fd0cfe6ffcbaaed5e7483ad636ecbd747862ac28"} Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.632267 4805 scope.go:117] "RemoveContainer" containerID="3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.632403 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.635980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7688d557bc-2jgzd" event={"ID":"95ea5d76-aedb-4a0a-a03d-fdc9140265e4","Type":"ContainerStarted","Data":"538cbdbdc317091766999162e092cd69803840be6f73fb5fb47e56e11c776e04"} Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.636018 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7688d557bc-2jgzd" event={"ID":"95ea5d76-aedb-4a0a-a03d-fdc9140265e4","Type":"ContainerStarted","Data":"a6f7da38de780a4645539763ce0f552d7ec6973b6b0fa3b314719f015be89f76"} Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.636031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7688d557bc-2jgzd" event={"ID":"95ea5d76-aedb-4a0a-a03d-fdc9140265e4","Type":"ContainerStarted","Data":"d9c7e62e1c892a3c9adcab9d6b4296d01c831c75998b7298ca5f556005b50db2"} Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.636244 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.636341 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.663693 4805 scope.go:117] "RemoveContainer" containerID="8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.677616 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7688d557bc-2jgzd" podStartSLOduration=2.677598399 podStartE2EDuration="2.677598399s" podCreationTimestamp="2026-02-16 21:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:18.66621459 +0000 UTC m=+1256.484897885" watchObservedRunningTime="2026-02-16 21:17:18.677598399 +0000 UTC m=+1256.496281694" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.706498 4805 scope.go:117] "RemoveContainer" containerID="ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.709814 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.736773 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.751882 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.752132 4805 scope.go:117] "RemoveContainer" containerID="2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0" Feb 16 21:17:18 crc kubenswrapper[4805]: E0216 21:17:18.752586 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="ceilometer-notification-agent" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.752608 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="ceilometer-notification-agent" Feb 16 21:17:18 crc kubenswrapper[4805]: E0216 21:17:18.752631 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="sg-core" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.752639 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="sg-core" Feb 16 21:17:18 crc kubenswrapper[4805]: E0216 21:17:18.752667 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="ceilometer-central-agent" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.752677 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="ceilometer-central-agent" Feb 16 21:17:18 crc kubenswrapper[4805]: E0216 21:17:18.752714 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="proxy-httpd" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.752735 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="proxy-httpd" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.753071 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="proxy-httpd" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.753101 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="ceilometer-central-agent" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.753118 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="ceilometer-notification-agent" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.753139 4805 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" containerName="sg-core" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.756799 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.763443 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.763664 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.781879 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.790000 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-config-data\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.790085 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhkg\" (UniqueName: \"kubernetes.io/projected/de576413-a2f4-4407-9fbe-39e5ca9b9768-kube-api-access-2mhkg\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.790171 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.790219 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.790788 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-log-httpd\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.790952 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-scripts\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.791121 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-run-httpd\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.809614 4805 scope.go:117] "RemoveContainer" containerID="3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561" Feb 16 21:17:18 crc kubenswrapper[4805]: E0216 21:17:18.811404 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561\": container with ID starting with 3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561 not found: ID does not exist" 
containerID="3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.811444 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561"} err="failed to get container status \"3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561\": rpc error: code = NotFound desc = could not find container \"3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561\": container with ID starting with 3af678819f089dd52b5600ca7c7b2cb14a8d09234fdd7a500c4aef6fdd165561 not found: ID does not exist" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.811467 4805 scope.go:117] "RemoveContainer" containerID="8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685" Feb 16 21:17:18 crc kubenswrapper[4805]: E0216 21:17:18.811854 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685\": container with ID starting with 8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685 not found: ID does not exist" containerID="8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.811875 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685"} err="failed to get container status \"8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685\": rpc error: code = NotFound desc = could not find container \"8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685\": container with ID starting with 8954d81505bae0ee4787c2adf22c5cc32ec901e7eddd02df9d9fb1c5f6af9685 not found: ID does not exist" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.811890 4805 scope.go:117] 
"RemoveContainer" containerID="ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b" Feb 16 21:17:18 crc kubenswrapper[4805]: E0216 21:17:18.813552 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b\": container with ID starting with ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b not found: ID does not exist" containerID="ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.813580 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b"} err="failed to get container status \"ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b\": rpc error: code = NotFound desc = could not find container \"ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b\": container with ID starting with ac428721379980ab08216c0629c0b3cf5c5c4e5f4a657656dbf79bd1d678782b not found: ID does not exist" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.813593 4805 scope.go:117] "RemoveContainer" containerID="2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0" Feb 16 21:17:18 crc kubenswrapper[4805]: E0216 21:17:18.814361 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0\": container with ID starting with 2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0 not found: ID does not exist" containerID="2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.814384 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0"} err="failed to get container status \"2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0\": rpc error: code = NotFound desc = could not find container \"2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0\": container with ID starting with 2a0415676469e3b1cbb14999fce6f3b6ad0ad93935be19d719481b9643a0c4f0 not found: ID does not exist" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.893961 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-config-data\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.894052 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mhkg\" (UniqueName: \"kubernetes.io/projected/de576413-a2f4-4407-9fbe-39e5ca9b9768-kube-api-access-2mhkg\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.894164 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.894188 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.894227 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-log-httpd\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.894271 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-scripts\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.894325 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-run-httpd\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.894978 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-run-httpd\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.895072 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-log-httpd\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.899611 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc 
kubenswrapper[4805]: I0216 21:17:18.902443 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-config-data\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.904422 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.905999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-scripts\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:18 crc kubenswrapper[4805]: I0216 21:17:18.911001 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mhkg\" (UniqueName: \"kubernetes.io/projected/de576413-a2f4-4407-9fbe-39e5ca9b9768-kube-api-access-2mhkg\") pod \"ceilometer-0\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " pod="openstack/ceilometer-0" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.086519 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.372439 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-xckfh"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.373974 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xckfh" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.387051 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xckfh"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.411934 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7678f4b8-da9a-4032-a853-1ec0ec5386c0-operator-scripts\") pod \"nova-api-db-create-xckfh\" (UID: \"7678f4b8-da9a-4032-a853-1ec0ec5386c0\") " pod="openstack/nova-api-db-create-xckfh" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.411990 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjcwk\" (UniqueName: \"kubernetes.io/projected/7678f4b8-da9a-4032-a853-1ec0ec5386c0-kube-api-access-zjcwk\") pod \"nova-api-db-create-xckfh\" (UID: \"7678f4b8-da9a-4032-a853-1ec0ec5386c0\") " pod="openstack/nova-api-db-create-xckfh" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.490747 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-tt79f"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.492119 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tt79f" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.510524 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-11b9-account-create-update-92pwx"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.512131 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-11b9-account-create-update-92pwx" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.513578 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7678f4b8-da9a-4032-a853-1ec0ec5386c0-operator-scripts\") pod \"nova-api-db-create-xckfh\" (UID: \"7678f4b8-da9a-4032-a853-1ec0ec5386c0\") " pod="openstack/nova-api-db-create-xckfh" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.513644 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjcwk\" (UniqueName: \"kubernetes.io/projected/7678f4b8-da9a-4032-a853-1ec0ec5386c0-kube-api-access-zjcwk\") pod \"nova-api-db-create-xckfh\" (UID: \"7678f4b8-da9a-4032-a853-1ec0ec5386c0\") " pod="openstack/nova-api-db-create-xckfh" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.514761 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7678f4b8-da9a-4032-a853-1ec0ec5386c0-operator-scripts\") pod \"nova-api-db-create-xckfh\" (UID: \"7678f4b8-da9a-4032-a853-1ec0ec5386c0\") " pod="openstack/nova-api-db-create-xckfh" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.519015 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.525165 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-11b9-account-create-update-92pwx"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.539776 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tt79f"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.547539 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjcwk\" (UniqueName: 
\"kubernetes.io/projected/7678f4b8-da9a-4032-a853-1ec0ec5386c0-kube-api-access-zjcwk\") pod \"nova-api-db-create-xckfh\" (UID: \"7678f4b8-da9a-4032-a853-1ec0ec5386c0\") " pod="openstack/nova-api-db-create-xckfh" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.584407 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-r8r58"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.585971 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-r8r58" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.592554 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-r8r58"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.616402 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xddj\" (UniqueName: \"kubernetes.io/projected/aa73323e-8833-4118-8d6b-f6de2261b33c-kube-api-access-9xddj\") pod \"nova-api-11b9-account-create-update-92pwx\" (UID: \"aa73323e-8833-4118-8d6b-f6de2261b33c\") " pod="openstack/nova-api-11b9-account-create-update-92pwx" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.616454 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk6km\" (UniqueName: \"kubernetes.io/projected/8a5da24d-77bc-444c-a344-c811c1430ea8-kube-api-access-wk6km\") pod \"nova-cell1-db-create-r8r58\" (UID: \"8a5da24d-77bc-444c-a344-c811c1430ea8\") " pod="openstack/nova-cell1-db-create-r8r58" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.616508 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46vd\" (UniqueName: \"kubernetes.io/projected/dc7c489f-b7a1-4cc3-827a-3de24bd86115-kube-api-access-f46vd\") pod \"nova-cell0-db-create-tt79f\" (UID: \"dc7c489f-b7a1-4cc3-827a-3de24bd86115\") " pod="openstack/nova-cell0-db-create-tt79f" 
Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.616597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7c489f-b7a1-4cc3-827a-3de24bd86115-operator-scripts\") pod \"nova-cell0-db-create-tt79f\" (UID: \"dc7c489f-b7a1-4cc3-827a-3de24bd86115\") " pod="openstack/nova-cell0-db-create-tt79f" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.616634 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a5da24d-77bc-444c-a344-c811c1430ea8-operator-scripts\") pod \"nova-cell1-db-create-r8r58\" (UID: \"8a5da24d-77bc-444c-a344-c811c1430ea8\") " pod="openstack/nova-cell1-db-create-r8r58" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.616707 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa73323e-8833-4118-8d6b-f6de2261b33c-operator-scripts\") pod \"nova-api-11b9-account-create-update-92pwx\" (UID: \"aa73323e-8833-4118-8d6b-f6de2261b33c\") " pod="openstack/nova-api-11b9-account-create-update-92pwx" Feb 16 21:17:19 crc kubenswrapper[4805]: W0216 21:17:19.646300 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde576413_a2f4_4407_9fbe_39e5ca9b9768.slice/crio-b315bdd1b7d83ad0c121a8863ce2bac5ccd2b50b2ff8dbc38f7b17fe32515f46 WatchSource:0}: Error finding container b315bdd1b7d83ad0c121a8863ce2bac5ccd2b50b2ff8dbc38f7b17fe32515f46: Status 404 returned error can't find the container with id b315bdd1b7d83ad0c121a8863ce2bac5ccd2b50b2ff8dbc38f7b17fe32515f46 Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.655683 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104ec6b3-3a02-486e-8948-0aeb16bbddd8" 
path="/var/lib/kubelet/pods/104ec6b3-3a02-486e-8948-0aeb16bbddd8/volumes" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.657270 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.718369 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7c489f-b7a1-4cc3-827a-3de24bd86115-operator-scripts\") pod \"nova-cell0-db-create-tt79f\" (UID: \"dc7c489f-b7a1-4cc3-827a-3de24bd86115\") " pod="openstack/nova-cell0-db-create-tt79f" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.718712 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a5da24d-77bc-444c-a344-c811c1430ea8-operator-scripts\") pod \"nova-cell1-db-create-r8r58\" (UID: \"8a5da24d-77bc-444c-a344-c811c1430ea8\") " pod="openstack/nova-cell1-db-create-r8r58" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.718924 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa73323e-8833-4118-8d6b-f6de2261b33c-operator-scripts\") pod \"nova-api-11b9-account-create-update-92pwx\" (UID: \"aa73323e-8833-4118-8d6b-f6de2261b33c\") " pod="openstack/nova-api-11b9-account-create-update-92pwx" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.719073 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7c489f-b7a1-4cc3-827a-3de24bd86115-operator-scripts\") pod \"nova-cell0-db-create-tt79f\" (UID: \"dc7c489f-b7a1-4cc3-827a-3de24bd86115\") " pod="openstack/nova-cell0-db-create-tt79f" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.722029 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8a5da24d-77bc-444c-a344-c811c1430ea8-operator-scripts\") pod \"nova-cell1-db-create-r8r58\" (UID: \"8a5da24d-77bc-444c-a344-c811c1430ea8\") " pod="openstack/nova-cell1-db-create-r8r58" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.722650 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.722698 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xddj\" (UniqueName: \"kubernetes.io/projected/aa73323e-8833-4118-8d6b-f6de2261b33c-kube-api-access-9xddj\") pod \"nova-api-11b9-account-create-update-92pwx\" (UID: \"aa73323e-8833-4118-8d6b-f6de2261b33c\") " pod="openstack/nova-api-11b9-account-create-update-92pwx" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.722800 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk6km\" (UniqueName: \"kubernetes.io/projected/8a5da24d-77bc-444c-a344-c811c1430ea8-kube-api-access-wk6km\") pod \"nova-cell1-db-create-r8r58\" (UID: \"8a5da24d-77bc-444c-a344-c811c1430ea8\") " pod="openstack/nova-cell1-db-create-r8r58" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.723110 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f46vd\" (UniqueName: \"kubernetes.io/projected/dc7c489f-b7a1-4cc3-827a-3de24bd86115-kube-api-access-f46vd\") pod \"nova-cell0-db-create-tt79f\" (UID: \"dc7c489f-b7a1-4cc3-827a-3de24bd86115\") " pod="openstack/nova-cell0-db-create-tt79f" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.723194 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa73323e-8833-4118-8d6b-f6de2261b33c-operator-scripts\") pod \"nova-api-11b9-account-create-update-92pwx\" (UID: \"aa73323e-8833-4118-8d6b-f6de2261b33c\") " pod="openstack/nova-api-11b9-account-create-update-92pwx" 
Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.748016 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xddj\" (UniqueName: \"kubernetes.io/projected/aa73323e-8833-4118-8d6b-f6de2261b33c-kube-api-access-9xddj\") pod \"nova-api-11b9-account-create-update-92pwx\" (UID: \"aa73323e-8833-4118-8d6b-f6de2261b33c\") " pod="openstack/nova-api-11b9-account-create-update-92pwx" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.751258 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f46vd\" (UniqueName: \"kubernetes.io/projected/dc7c489f-b7a1-4cc3-827a-3de24bd86115-kube-api-access-f46vd\") pod \"nova-cell0-db-create-tt79f\" (UID: \"dc7c489f-b7a1-4cc3-827a-3de24bd86115\") " pod="openstack/nova-cell0-db-create-tt79f" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.753767 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xckfh" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.765280 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-13af-account-create-update-2jbb4"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.765409 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk6km\" (UniqueName: \"kubernetes.io/projected/8a5da24d-77bc-444c-a344-c811c1430ea8-kube-api-access-wk6km\") pod \"nova-cell1-db-create-r8r58\" (UID: \"8a5da24d-77bc-444c-a344-c811c1430ea8\") " pod="openstack/nova-cell1-db-create-r8r58" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.767010 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-13af-account-create-update-2jbb4" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.772090 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.788427 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-13af-account-create-update-2jbb4"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.817341 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tt79f" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.832344 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/165cd002-0510-49a6-8322-5e2fe84e99c1-operator-scripts\") pod \"nova-cell0-13af-account-create-update-2jbb4\" (UID: \"165cd002-0510-49a6-8322-5e2fe84e99c1\") " pod="openstack/nova-cell0-13af-account-create-update-2jbb4" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.832388 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtm9k\" (UniqueName: \"kubernetes.io/projected/165cd002-0510-49a6-8322-5e2fe84e99c1-kube-api-access-qtm9k\") pod \"nova-cell0-13af-account-create-update-2jbb4\" (UID: \"165cd002-0510-49a6-8322-5e2fe84e99c1\") " pod="openstack/nova-cell0-13af-account-create-update-2jbb4" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.835942 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-11b9-account-create-update-92pwx" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.877789 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1975-account-create-update-sbmx2"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.879301 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1975-account-create-update-sbmx2" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.882383 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.917928 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1975-account-create-update-sbmx2"] Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.934143 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gzw4\" (UniqueName: \"kubernetes.io/projected/af9beaaa-c93c-4f38-93d4-c86d4156ad44-kube-api-access-2gzw4\") pod \"nova-cell1-1975-account-create-update-sbmx2\" (UID: \"af9beaaa-c93c-4f38-93d4-c86d4156ad44\") " pod="openstack/nova-cell1-1975-account-create-update-sbmx2" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.934496 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af9beaaa-c93c-4f38-93d4-c86d4156ad44-operator-scripts\") pod \"nova-cell1-1975-account-create-update-sbmx2\" (UID: \"af9beaaa-c93c-4f38-93d4-c86d4156ad44\") " pod="openstack/nova-cell1-1975-account-create-update-sbmx2" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.934632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/165cd002-0510-49a6-8322-5e2fe84e99c1-operator-scripts\") pod \"nova-cell0-13af-account-create-update-2jbb4\" (UID: \"165cd002-0510-49a6-8322-5e2fe84e99c1\") " pod="openstack/nova-cell0-13af-account-create-update-2jbb4" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.934707 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtm9k\" (UniqueName: 
\"kubernetes.io/projected/165cd002-0510-49a6-8322-5e2fe84e99c1-kube-api-access-qtm9k\") pod \"nova-cell0-13af-account-create-update-2jbb4\" (UID: \"165cd002-0510-49a6-8322-5e2fe84e99c1\") " pod="openstack/nova-cell0-13af-account-create-update-2jbb4" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.935909 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/165cd002-0510-49a6-8322-5e2fe84e99c1-operator-scripts\") pod \"nova-cell0-13af-account-create-update-2jbb4\" (UID: \"165cd002-0510-49a6-8322-5e2fe84e99c1\") " pod="openstack/nova-cell0-13af-account-create-update-2jbb4" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.960157 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtm9k\" (UniqueName: \"kubernetes.io/projected/165cd002-0510-49a6-8322-5e2fe84e99c1-kube-api-access-qtm9k\") pod \"nova-cell0-13af-account-create-update-2jbb4\" (UID: \"165cd002-0510-49a6-8322-5e2fe84e99c1\") " pod="openstack/nova-cell0-13af-account-create-update-2jbb4" Feb 16 21:17:19 crc kubenswrapper[4805]: I0216 21:17:19.982350 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-r8r58" Feb 16 21:17:20 crc kubenswrapper[4805]: I0216 21:17:20.038002 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gzw4\" (UniqueName: \"kubernetes.io/projected/af9beaaa-c93c-4f38-93d4-c86d4156ad44-kube-api-access-2gzw4\") pod \"nova-cell1-1975-account-create-update-sbmx2\" (UID: \"af9beaaa-c93c-4f38-93d4-c86d4156ad44\") " pod="openstack/nova-cell1-1975-account-create-update-sbmx2" Feb 16 21:17:20 crc kubenswrapper[4805]: I0216 21:17:20.038065 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af9beaaa-c93c-4f38-93d4-c86d4156ad44-operator-scripts\") pod \"nova-cell1-1975-account-create-update-sbmx2\" (UID: \"af9beaaa-c93c-4f38-93d4-c86d4156ad44\") " pod="openstack/nova-cell1-1975-account-create-update-sbmx2" Feb 16 21:17:20 crc kubenswrapper[4805]: I0216 21:17:20.038759 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af9beaaa-c93c-4f38-93d4-c86d4156ad44-operator-scripts\") pod \"nova-cell1-1975-account-create-update-sbmx2\" (UID: \"af9beaaa-c93c-4f38-93d4-c86d4156ad44\") " pod="openstack/nova-cell1-1975-account-create-update-sbmx2" Feb 16 21:17:20 crc kubenswrapper[4805]: I0216 21:17:20.061328 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gzw4\" (UniqueName: \"kubernetes.io/projected/af9beaaa-c93c-4f38-93d4-c86d4156ad44-kube-api-access-2gzw4\") pod \"nova-cell1-1975-account-create-update-sbmx2\" (UID: \"af9beaaa-c93c-4f38-93d4-c86d4156ad44\") " pod="openstack/nova-cell1-1975-account-create-update-sbmx2" Feb 16 21:17:20 crc kubenswrapper[4805]: I0216 21:17:20.090177 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-13af-account-create-update-2jbb4" Feb 16 21:17:20 crc kubenswrapper[4805]: I0216 21:17:20.195385 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1975-account-create-update-sbmx2" Feb 16 21:17:20 crc kubenswrapper[4805]: I0216 21:17:20.694478 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de576413-a2f4-4407-9fbe-39e5ca9b9768","Type":"ContainerStarted","Data":"b315bdd1b7d83ad0c121a8863ce2bac5ccd2b50b2ff8dbc38f7b17fe32515f46"} Feb 16 21:17:23 crc kubenswrapper[4805]: I0216 21:17:23.113176 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:17:23 crc kubenswrapper[4805]: I0216 21:17:23.113874 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7352be72-3bf9-4377-a713-ab6058b6785f" containerName="glance-log" containerID="cri-o://c14b3f1c62e3d753fdb59ba073293810c918aa4b0b3b65bdcb067df76e691889" gracePeriod=30 Feb 16 21:17:23 crc kubenswrapper[4805]: I0216 21:17:23.114016 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7352be72-3bf9-4377-a713-ab6058b6785f" containerName="glance-httpd" containerID="cri-o://3086fe672dbe9a3c4c5163c1bcaa203abca8433d121980e3ed65dce2dd71734d" gracePeriod=30 Feb 16 21:17:23 crc kubenswrapper[4805]: I0216 21:17:23.759508 4805 generic.go:334] "Generic (PLEG): container finished" podID="7352be72-3bf9-4377-a713-ab6058b6785f" containerID="c14b3f1c62e3d753fdb59ba073293810c918aa4b0b3b65bdcb067df76e691889" exitCode=143 Feb 16 21:17:23 crc kubenswrapper[4805]: I0216 21:17:23.759743 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"7352be72-3bf9-4377-a713-ab6058b6785f","Type":"ContainerDied","Data":"c14b3f1c62e3d753fdb59ba073293810c918aa4b0b3b65bdcb067df76e691889"} Feb 16 21:17:24 crc kubenswrapper[4805]: I0216 21:17:24.298305 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="baa0398e-38fe-456c-8456-53c083f8e121" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.203:8776/healthcheck\": dial tcp 10.217.0.203:8776: connect: connection refused" Feb 16 21:17:24 crc kubenswrapper[4805]: I0216 21:17:24.624153 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8fbb985b9-2x2rd" Feb 16 21:17:24 crc kubenswrapper[4805]: I0216 21:17:24.709692 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8d5dc9954-x56z5"] Feb 16 21:17:24 crc kubenswrapper[4805]: I0216 21:17:24.710808 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8d5dc9954-x56z5" podUID="37e4f0f1-8158-409b-95a0-12826bddebc2" containerName="neutron-api" containerID="cri-o://705098690a2e635d29fb6f5dc24c44c61f6c3b4a6030bfd52b4d56c668dfd944" gracePeriod=30 Feb 16 21:17:24 crc kubenswrapper[4805]: I0216 21:17:24.711384 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8d5dc9954-x56z5" podUID="37e4f0f1-8158-409b-95a0-12826bddebc2" containerName="neutron-httpd" containerID="cri-o://2000ac46f12288b12f5899a913f096ea75601e47b6a656ee2c1776eba5b0c7f9" gracePeriod=30 Feb 16 21:17:24 crc kubenswrapper[4805]: I0216 21:17:24.776655 4805 generic.go:334] "Generic (PLEG): container finished" podID="baa0398e-38fe-456c-8456-53c083f8e121" containerID="aa197270c0589e86022a95f6f1f990279686b5f941c4aef6c69ffde642ad2338" exitCode=137 Feb 16 21:17:24 crc kubenswrapper[4805]: I0216 21:17:24.776697 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"baa0398e-38fe-456c-8456-53c083f8e121","Type":"ContainerDied","Data":"aa197270c0589e86022a95f6f1f990279686b5f941c4aef6c69ffde642ad2338"} Feb 16 21:17:25 crc kubenswrapper[4805]: I0216 21:17:25.762867 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:17:25 crc kubenswrapper[4805]: I0216 21:17:25.763195 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="33763587-13b0-4c1c-af15-3164866a25aa" containerName="glance-log" containerID="cri-o://579b13e4e98a86f06fa8ce1c9461bcbdd689cbd24875db5359c3d75ac249fae8" gracePeriod=30 Feb 16 21:17:25 crc kubenswrapper[4805]: I0216 21:17:25.763271 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="33763587-13b0-4c1c-af15-3164866a25aa" containerName="glance-httpd" containerID="cri-o://9ff579908d254c88c054d1ce4b4c1d0251e0a4fa1ca6a552c83d4f094af1a0a3" gracePeriod=30 Feb 16 21:17:25 crc kubenswrapper[4805]: I0216 21:17:25.794298 4805 generic.go:334] "Generic (PLEG): container finished" podID="37e4f0f1-8158-409b-95a0-12826bddebc2" containerID="2000ac46f12288b12f5899a913f096ea75601e47b6a656ee2c1776eba5b0c7f9" exitCode=0 Feb 16 21:17:25 crc kubenswrapper[4805]: I0216 21:17:25.794339 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d5dc9954-x56z5" event={"ID":"37e4f0f1-8158-409b-95a0-12826bddebc2","Type":"ContainerDied","Data":"2000ac46f12288b12f5899a913f096ea75601e47b6a656ee2c1776eba5b0c7f9"} Feb 16 21:17:26 crc kubenswrapper[4805]: I0216 21:17:26.806790 4805 generic.go:334] "Generic (PLEG): container finished" podID="7352be72-3bf9-4377-a713-ab6058b6785f" containerID="3086fe672dbe9a3c4c5163c1bcaa203abca8433d121980e3ed65dce2dd71734d" exitCode=0 Feb 16 21:17:26 crc kubenswrapper[4805]: I0216 21:17:26.807054 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"7352be72-3bf9-4377-a713-ab6058b6785f","Type":"ContainerDied","Data":"3086fe672dbe9a3c4c5163c1bcaa203abca8433d121980e3ed65dce2dd71734d"} Feb 16 21:17:26 crc kubenswrapper[4805]: I0216 21:17:26.808296 4805 generic.go:334] "Generic (PLEG): container finished" podID="33763587-13b0-4c1c-af15-3164866a25aa" containerID="579b13e4e98a86f06fa8ce1c9461bcbdd689cbd24875db5359c3d75ac249fae8" exitCode=143 Feb 16 21:17:26 crc kubenswrapper[4805]: I0216 21:17:26.808315 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"33763587-13b0-4c1c-af15-3164866a25aa","Type":"ContainerDied","Data":"579b13e4e98a86f06fa8ce1c9461bcbdd689cbd24875db5359c3d75ac249fae8"} Feb 16 21:17:26 crc kubenswrapper[4805]: I0216 21:17:26.999300 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.000255 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7688d557bc-2jgzd" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.715011 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.827807 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"baa0398e-38fe-456c-8456-53c083f8e121","Type":"ContainerDied","Data":"a03894e9ac20d05abab769c09b75839aa9c9734261a0562f056257c6e863e766"} Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.828095 4805 scope.go:117] "RemoveContainer" containerID="aa197270c0589e86022a95f6f1f990279686b5f941c4aef6c69ffde642ad2338" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.827840 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.831333 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data-custom\") pod \"baa0398e-38fe-456c-8456-53c083f8e121\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.831422 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djscf\" (UniqueName: \"kubernetes.io/projected/baa0398e-38fe-456c-8456-53c083f8e121-kube-api-access-djscf\") pod \"baa0398e-38fe-456c-8456-53c083f8e121\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.831499 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-scripts\") pod \"baa0398e-38fe-456c-8456-53c083f8e121\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.831606 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data\") pod \"baa0398e-38fe-456c-8456-53c083f8e121\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.831634 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baa0398e-38fe-456c-8456-53c083f8e121-etc-machine-id\") pod \"baa0398e-38fe-456c-8456-53c083f8e121\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.831650 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-combined-ca-bundle\") pod \"baa0398e-38fe-456c-8456-53c083f8e121\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.831808 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baa0398e-38fe-456c-8456-53c083f8e121-logs\") pod \"baa0398e-38fe-456c-8456-53c083f8e121\" (UID: \"baa0398e-38fe-456c-8456-53c083f8e121\") " Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.833315 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baa0398e-38fe-456c-8456-53c083f8e121-logs" (OuterVolumeSpecName: "logs") pod "baa0398e-38fe-456c-8456-53c083f8e121" (UID: "baa0398e-38fe-456c-8456-53c083f8e121"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.833360 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa0398e-38fe-456c-8456-53c083f8e121-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "baa0398e-38fe-456c-8456-53c083f8e121" (UID: "baa0398e-38fe-456c-8456-53c083f8e121"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.841379 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "baa0398e-38fe-456c-8456-53c083f8e121" (UID: "baa0398e-38fe-456c-8456-53c083f8e121"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.842924 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-scripts" (OuterVolumeSpecName: "scripts") pod "baa0398e-38fe-456c-8456-53c083f8e121" (UID: "baa0398e-38fe-456c-8456-53c083f8e121"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.867917 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa0398e-38fe-456c-8456-53c083f8e121-kube-api-access-djscf" (OuterVolumeSpecName: "kube-api-access-djscf") pod "baa0398e-38fe-456c-8456-53c083f8e121" (UID: "baa0398e-38fe-456c-8456-53c083f8e121"). InnerVolumeSpecName "kube-api-access-djscf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.935057 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baa0398e-38fe-456c-8456-53c083f8e121-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.935081 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.935090 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djscf\" (UniqueName: \"kubernetes.io/projected/baa0398e-38fe-456c-8456-53c083f8e121-kube-api-access-djscf\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.935098 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 
21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.935107 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baa0398e-38fe-456c-8456-53c083f8e121-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.976917 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baa0398e-38fe-456c-8456-53c083f8e121" (UID: "baa0398e-38fe-456c-8456-53c083f8e121"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:27 crc kubenswrapper[4805]: I0216 21:17:27.977644 4805 scope.go:117] "RemoveContainer" containerID="b377e27b34bd0e6004ba931a6b5c3a9286cad9521a4c3fb7bf2a2522c386ae23" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.001192 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data" (OuterVolumeSpecName: "config-data") pod "baa0398e-38fe-456c-8456-53c083f8e121" (UID: "baa0398e-38fe-456c-8456-53c083f8e121"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.047533 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.047652 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0398e-38fe-456c-8456-53c083f8e121-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.067521 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.154134 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-scripts\") pod \"7352be72-3bf9-4377-a713-ab6058b6785f\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.154317 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-config-data\") pod \"7352be72-3bf9-4377-a713-ab6058b6785f\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.154539 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnmr9\" (UniqueName: \"kubernetes.io/projected/7352be72-3bf9-4377-a713-ab6058b6785f-kube-api-access-gnmr9\") pod \"7352be72-3bf9-4377-a713-ab6058b6785f\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.154799 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-public-tls-certs\") pod \"7352be72-3bf9-4377-a713-ab6058b6785f\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.154971 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-logs\") pod \"7352be72-3bf9-4377-a713-ab6058b6785f\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.155184 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-combined-ca-bundle\") pod \"7352be72-3bf9-4377-a713-ab6058b6785f\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.155505 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"7352be72-3bf9-4377-a713-ab6058b6785f\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.155856 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-httpd-run\") pod \"7352be72-3bf9-4377-a713-ab6058b6785f\" (UID: \"7352be72-3bf9-4377-a713-ab6058b6785f\") " Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.157266 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-logs" (OuterVolumeSpecName: "logs") pod "7352be72-3bf9-4377-a713-ab6058b6785f" (UID: "7352be72-3bf9-4377-a713-ab6058b6785f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.159508 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7352be72-3bf9-4377-a713-ab6058b6785f" (UID: "7352be72-3bf9-4377-a713-ab6058b6785f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.161397 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.167251 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-scripts" (OuterVolumeSpecName: "scripts") pod "7352be72-3bf9-4377-a713-ab6058b6785f" (UID: "7352be72-3bf9-4377-a713-ab6058b6785f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.171630 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7352be72-3bf9-4377-a713-ab6058b6785f-kube-api-access-gnmr9" (OuterVolumeSpecName: "kube-api-access-gnmr9") pod "7352be72-3bf9-4377-a713-ab6058b6785f" (UID: "7352be72-3bf9-4377-a713-ab6058b6785f"). InnerVolumeSpecName "kube-api-access-gnmr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.223541 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa" (OuterVolumeSpecName: "glance") pod "7352be72-3bf9-4377-a713-ab6058b6785f" (UID: "7352be72-3bf9-4377-a713-ab6058b6785f"). InnerVolumeSpecName "pvc-f93d250e-b474-4652-90b3-558818d0e8aa". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.225474 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7352be72-3bf9-4377-a713-ab6058b6785f" (UID: "7352be72-3bf9-4377-a713-ab6058b6785f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.263324 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.263361 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnmr9\" (UniqueName: \"kubernetes.io/projected/7352be72-3bf9-4377-a713-ab6058b6785f-kube-api-access-gnmr9\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.263373 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.263396 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") on node \"crc\" " Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.263405 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7352be72-3bf9-4377-a713-ab6058b6785f-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.291305 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-db-create-r8r58"] Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.315603 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-11b9-account-create-update-92pwx"] Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.332886 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-config-data" (OuterVolumeSpecName: "config-data") pod "7352be72-3bf9-4377-a713-ab6058b6785f" (UID: "7352be72-3bf9-4377-a713-ab6058b6785f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.339961 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.340313 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.340485 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f93d250e-b474-4652-90b3-558818d0e8aa" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa") on node "crc" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.351923 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7352be72-3bf9-4377-a713-ab6058b6785f" (UID: "7352be72-3bf9-4377-a713-ab6058b6785f"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.366870 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.366900 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.366912 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7352be72-3bf9-4377-a713-ab6058b6785f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.425635 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.435823 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.451632 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:17:28 crc kubenswrapper[4805]: E0216 21:17:28.452494 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7352be72-3bf9-4377-a713-ab6058b6785f" containerName="glance-httpd" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.452513 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7352be72-3bf9-4377-a713-ab6058b6785f" containerName="glance-httpd" Feb 16 21:17:28 crc kubenswrapper[4805]: E0216 21:17:28.452528 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa0398e-38fe-456c-8456-53c083f8e121" containerName="cinder-api" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.452535 4805 
state_mem.go:107] "Deleted CPUSet assignment" podUID="baa0398e-38fe-456c-8456-53c083f8e121" containerName="cinder-api" Feb 16 21:17:28 crc kubenswrapper[4805]: E0216 21:17:28.452554 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7352be72-3bf9-4377-a713-ab6058b6785f" containerName="glance-log" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.452560 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7352be72-3bf9-4377-a713-ab6058b6785f" containerName="glance-log" Feb 16 21:17:28 crc kubenswrapper[4805]: E0216 21:17:28.452576 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa0398e-38fe-456c-8456-53c083f8e121" containerName="cinder-api-log" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.452582 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa0398e-38fe-456c-8456-53c083f8e121" containerName="cinder-api-log" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.452833 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7352be72-3bf9-4377-a713-ab6058b6785f" containerName="glance-log" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.452853 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa0398e-38fe-456c-8456-53c083f8e121" containerName="cinder-api-log" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.452878 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7352be72-3bf9-4377-a713-ab6058b6785f" containerName="glance-httpd" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.452897 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa0398e-38fe-456c-8456-53c083f8e121" containerName="cinder-api" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.454051 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.457821 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.457821 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.457870 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.483973 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.569864 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-config-data\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.569917 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.569959 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-config-data-custom\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.569982 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2xcr8\" (UniqueName: \"kubernetes.io/projected/f241b99d-b7d7-4897-9cfa-bd3201582861-kube-api-access-2xcr8\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.570066 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f241b99d-b7d7-4897-9cfa-bd3201582861-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.570119 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f241b99d-b7d7-4897-9cfa-bd3201582861-logs\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.570172 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.570235 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-scripts\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.570282 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-combined-ca-bundle\") pod \"cinder-api-0\" 
(UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.672850 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-config-data-custom\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.673208 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xcr8\" (UniqueName: \"kubernetes.io/projected/f241b99d-b7d7-4897-9cfa-bd3201582861-kube-api-access-2xcr8\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.673325 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f241b99d-b7d7-4897-9cfa-bd3201582861-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.673384 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f241b99d-b7d7-4897-9cfa-bd3201582861-logs\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.673457 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.673561 4805 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-scripts\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.673605 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.673646 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-config-data\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.673684 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.676107 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f241b99d-b7d7-4897-9cfa-bd3201582861-logs\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.679899 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: 
I0216 21:17:28.680051 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.680118 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f241b99d-b7d7-4897-9cfa-bd3201582861-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.680448 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-config-data\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.681071 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.681995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-config-data-custom\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.686098 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f241b99d-b7d7-4897-9cfa-bd3201582861-scripts\") pod \"cinder-api-0\" (UID: 
\"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.706842 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tt79f"] Feb 16 21:17:28 crc kubenswrapper[4805]: I0216 21:17:28.711051 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xcr8\" (UniqueName: \"kubernetes.io/projected/f241b99d-b7d7-4897-9cfa-bd3201582861-kube-api-access-2xcr8\") pod \"cinder-api-0\" (UID: \"f241b99d-b7d7-4897-9cfa-bd3201582861\") " pod="openstack/cinder-api-0" Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.816435 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.911228 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa0398e-38fe-456c-8456-53c083f8e121" path="/var/lib/kubelet/pods/baa0398e-38fe-456c-8456-53c083f8e121/volumes" Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.915210 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.917217 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1975-account-create-update-sbmx2"] Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.920131 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-13af-account-create-update-2jbb4"] Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.922875 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7352be72-3bf9-4377-a713-ab6058b6785f","Type":"ContainerDied","Data":"521d6f18f7391ae47914d8f4a7e39939a6af53348906124f106f5434a65b2153"} Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.923009 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xckfh"] Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.923128 4805 scope.go:117] "RemoveContainer" containerID="3086fe672dbe9a3c4c5163c1bcaa203abca8433d121980e3ed65dce2dd71734d" Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.938200 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.954303 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de576413-a2f4-4407-9fbe-39e5ca9b9768","Type":"ContainerStarted","Data":"15034a1d378a1bd9a0f6d471331bd3330700330ddc6bbf347c2e3a05f642b6cc"} Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.963636 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1d4f5d67-11fe-406b-ac3d-48fb09f5a513","Type":"ContainerStarted","Data":"752a65876f476d3cc3237b050a9959519aaa3e16a3765dd74f78208ab274cf48"} Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.971392 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-cell0-db-secret" Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.972452 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-11b9-account-create-update-92pwx" event={"ID":"aa73323e-8833-4118-8d6b-f6de2261b33c","Type":"ContainerStarted","Data":"a536dd3fada0af5e7b46bd3cbe4a8ff19bf2c0905a7d9dd1ef46d45baa78de49"} Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.973775 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-r8r58" event={"ID":"8a5da24d-77bc-444c-a344-c811c1430ea8","Type":"ContainerStarted","Data":"006d0bf3ba21631476b76f6484d50c51c16f70dbc791e36cc86020fc3a7c1571"} Feb 16 21:17:29 crc kubenswrapper[4805]: I0216 21:17:29.984236 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.608153607 podStartE2EDuration="16.984207856s" podCreationTimestamp="2026-02-16 21:17:13 +0000 UTC" firstStartedPulling="2026-02-16 21:17:14.087994439 +0000 UTC m=+1251.906677734" lastFinishedPulling="2026-02-16 21:17:27.464048688 +0000 UTC m=+1265.282731983" observedRunningTime="2026-02-16 21:17:29.978026323 +0000 UTC m=+1267.796709628" watchObservedRunningTime="2026-02-16 21:17:29.984207856 +0000 UTC m=+1267.802891151" Feb 16 21:17:30 crc kubenswrapper[4805]: I0216 21:17:30.456000 4805 scope.go:117] "RemoveContainer" containerID="c14b3f1c62e3d753fdb59ba073293810c918aa4b0b3b65bdcb067df76e691889" Feb 16 21:17:30 crc kubenswrapper[4805]: I0216 21:17:30.763495 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.003545 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-r8r58" event={"ID":"8a5da24d-77bc-444c-a344-c811c1430ea8","Type":"ContainerStarted","Data":"1e4796f0357243266676f33412894b122e3852ff68bbbdfcce109d1d6908c0cc"} Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.008794 
4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-13af-account-create-update-2jbb4" event={"ID":"165cd002-0510-49a6-8322-5e2fe84e99c1","Type":"ContainerStarted","Data":"472fda4e879fd280b1a3723d26755f78898ec620f2139b4cbb65f3f08e152152"} Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.008833 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-13af-account-create-update-2jbb4" event={"ID":"165cd002-0510-49a6-8322-5e2fe84e99c1","Type":"ContainerStarted","Data":"10e3c56556162da87a2edc4dbecb711e9d0c62274cead62968c1fb446dc2dfdf"} Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.041826 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-13af-account-create-update-2jbb4" podStartSLOduration=12.041806541 podStartE2EDuration="12.041806541s" podCreationTimestamp="2026-02-16 21:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:31.036161733 +0000 UTC m=+1268.854845028" watchObservedRunningTime="2026-02-16 21:17:31.041806541 +0000 UTC m=+1268.860489836" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.050357 4805 generic.go:334] "Generic (PLEG): container finished" podID="37e4f0f1-8158-409b-95a0-12826bddebc2" containerID="705098690a2e635d29fb6f5dc24c44c61f6c3b4a6030bfd52b4d56c668dfd944" exitCode=0 Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.050869 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d5dc9954-x56z5" event={"ID":"37e4f0f1-8158-409b-95a0-12826bddebc2","Type":"ContainerDied","Data":"705098690a2e635d29fb6f5dc24c44c61f6c3b4a6030bfd52b4d56c668dfd944"} Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.052837 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tt79f" 
event={"ID":"dc7c489f-b7a1-4cc3-827a-3de24bd86115","Type":"ContainerStarted","Data":"8943561148c42033ecc3da26a6e9ca4ba43e9cce4fff5357f6713d90dce50692"} Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.057948 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1975-account-create-update-sbmx2" event={"ID":"af9beaaa-c93c-4f38-93d4-c86d4156ad44","Type":"ContainerStarted","Data":"5f7c5295bd19ed6669d0ecaf8db53ded4c6f5cc1b18c5d143908aff1eb6d6298"} Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.072291 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xckfh" event={"ID":"7678f4b8-da9a-4032-a853-1ec0ec5386c0","Type":"ContainerStarted","Data":"e1c2bdcebf3439628e116b5240543f9e6238a27fee265db7fa938e29aaa0e5d2"} Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.091215 4805 generic.go:334] "Generic (PLEG): container finished" podID="33763587-13b0-4c1c-af15-3164866a25aa" containerID="9ff579908d254c88c054d1ce4b4c1d0251e0a4fa1ca6a552c83d4f094af1a0a3" exitCode=0 Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.091304 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"33763587-13b0-4c1c-af15-3164866a25aa","Type":"ContainerDied","Data":"9ff579908d254c88c054d1ce4b4c1d0251e0a4fa1ca6a552c83d4f094af1a0a3"} Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.123225 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f241b99d-b7d7-4897-9cfa-bd3201582861","Type":"ContainerStarted","Data":"d4f2cfa1b4701ffba5c6a7bd504d965ee22bdd7f75bcf0e71fdc1e1d2825f7a5"} Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.144229 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-11b9-account-create-update-92pwx" podStartSLOduration=12.14421215 podStartE2EDuration="12.14421215s" podCreationTimestamp="2026-02-16 21:17:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:31.138905893 +0000 UTC m=+1268.957589188" watchObservedRunningTime="2026-02-16 21:17:31.14421215 +0000 UTC m=+1268.962895445" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.522755 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.572524 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.600551 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-ovndb-tls-certs\") pod \"37e4f0f1-8158-409b-95a0-12826bddebc2\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.600768 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-httpd-config\") pod \"37e4f0f1-8158-409b-95a0-12826bddebc2\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.600881 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-config\") pod \"37e4f0f1-8158-409b-95a0-12826bddebc2\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.600928 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h79x8\" (UniqueName: \"kubernetes.io/projected/37e4f0f1-8158-409b-95a0-12826bddebc2-kube-api-access-h79x8\") pod \"37e4f0f1-8158-409b-95a0-12826bddebc2\" 
(UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.602405 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-combined-ca-bundle\") pod \"37e4f0f1-8158-409b-95a0-12826bddebc2\" (UID: \"37e4f0f1-8158-409b-95a0-12826bddebc2\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.648595 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "37e4f0f1-8158-409b-95a0-12826bddebc2" (UID: "37e4f0f1-8158-409b-95a0-12826bddebc2"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.652149 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e4f0f1-8158-409b-95a0-12826bddebc2-kube-api-access-h79x8" (OuterVolumeSpecName: "kube-api-access-h79x8") pod "37e4f0f1-8158-409b-95a0-12826bddebc2" (UID: "37e4f0f1-8158-409b-95a0-12826bddebc2"). InnerVolumeSpecName "kube-api-access-h79x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.744658 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37e4f0f1-8158-409b-95a0-12826bddebc2" (UID: "37e4f0f1-8158-409b-95a0-12826bddebc2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.745326 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-internal-tls-certs\") pod \"33763587-13b0-4c1c-af15-3164866a25aa\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.745416 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-combined-ca-bundle\") pod \"33763587-13b0-4c1c-af15-3164866a25aa\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.745433 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-logs\") pod \"33763587-13b0-4c1c-af15-3164866a25aa\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.745532 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"33763587-13b0-4c1c-af15-3164866a25aa\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.745589 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-config-data\") pod \"33763587-13b0-4c1c-af15-3164866a25aa\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.745605 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-httpd-run\") pod \"33763587-13b0-4c1c-af15-3164866a25aa\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.745736 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-scripts\") pod \"33763587-13b0-4c1c-af15-3164866a25aa\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.745875 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zckgc\" (UniqueName: \"kubernetes.io/projected/33763587-13b0-4c1c-af15-3164866a25aa-kube-api-access-zckgc\") pod \"33763587-13b0-4c1c-af15-3164866a25aa\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.746418 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-logs" (OuterVolumeSpecName: "logs") pod "33763587-13b0-4c1c-af15-3164866a25aa" (UID: "33763587-13b0-4c1c-af15-3164866a25aa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.747684 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h79x8\" (UniqueName: \"kubernetes.io/projected/37e4f0f1-8158-409b-95a0-12826bddebc2-kube-api-access-h79x8\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.747755 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.747768 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.747778 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.751219 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "33763587-13b0-4c1c-af15-3164866a25aa" (UID: "33763587-13b0-4c1c-af15-3164866a25aa"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.755496 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-scripts" (OuterVolumeSpecName: "scripts") pod "33763587-13b0-4c1c-af15-3164866a25aa" (UID: "33763587-13b0-4c1c-af15-3164866a25aa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.781601 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33763587-13b0-4c1c-af15-3164866a25aa-kube-api-access-zckgc" (OuterVolumeSpecName: "kube-api-access-zckgc") pod "33763587-13b0-4c1c-af15-3164866a25aa" (UID: "33763587-13b0-4c1c-af15-3164866a25aa"). InnerVolumeSpecName "kube-api-access-zckgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.791257 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12" (OuterVolumeSpecName: "glance") pod "33763587-13b0-4c1c-af15-3164866a25aa" (UID: "33763587-13b0-4c1c-af15-3164866a25aa"). InnerVolumeSpecName "pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.815592 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-config" (OuterVolumeSpecName: "config") pod "37e4f0f1-8158-409b-95a0-12826bddebc2" (UID: "37e4f0f1-8158-409b-95a0-12826bddebc2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.822163 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "37e4f0f1-8158-409b-95a0-12826bddebc2" (UID: "37e4f0f1-8158-409b-95a0-12826bddebc2"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.830852 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33763587-13b0-4c1c-af15-3164866a25aa" (UID: "33763587-13b0-4c1c-af15-3164866a25aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.850222 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.850272 4805 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.850290 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zckgc\" (UniqueName: \"kubernetes.io/projected/33763587-13b0-4c1c-af15-3164866a25aa-kube-api-access-zckgc\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.850301 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.850344 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") on node \"crc\" " Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.850363 4805 reconciler_common.go:293] "Volume 
detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33763587-13b0-4c1c-af15-3164866a25aa-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.850378 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/37e4f0f1-8158-409b-95a0-12826bddebc2-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:31 crc kubenswrapper[4805]: E0216 21:17:31.850705 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-config-data podName:33763587-13b0-4c1c-af15-3164866a25aa nodeName:}" failed. No retries permitted until 2026-02-16 21:17:32.350673889 +0000 UTC m=+1270.169357184 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-config-data") pod "33763587-13b0-4c1c-af15-3164866a25aa" (UID: "33763587-13b0-4c1c-af15-3164866a25aa") : error deleting /var/lib/kubelet/pods/33763587-13b0-4c1c-af15-3164866a25aa/volume-subpaths: remove /var/lib/kubelet/pods/33763587-13b0-4c1c-af15-3164866a25aa/volume-subpaths: no such file or directory Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.855964 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "33763587-13b0-4c1c-af15-3164866a25aa" (UID: "33763587-13b0-4c1c-af15-3164866a25aa"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.888184 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.888322 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12") on node "crc" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.952481 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:31 crc kubenswrapper[4805]: I0216 21:17:31.952521 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.139282 4805 generic.go:334] "Generic (PLEG): container finished" podID="af9beaaa-c93c-4f38-93d4-c86d4156ad44" containerID="1987ac6c330121034d1aeb641468f4aa56eaffe98de924216769859e65ddb5b1" exitCode=0 Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.142043 4805 generic.go:334] "Generic (PLEG): container finished" podID="aa73323e-8833-4118-8d6b-f6de2261b33c" containerID="f3b24048f88b0f72c516a827c497586c32e17a3bcbae6d63a088dff920ac76d0" exitCode=0 Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.144129 4805 generic.go:334] "Generic (PLEG): container finished" podID="8a5da24d-77bc-444c-a344-c811c1430ea8" containerID="1e4796f0357243266676f33412894b122e3852ff68bbbdfcce109d1d6908c0cc" exitCode=0 Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.220750 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8d5dc9954-x56z5" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.227176 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1975-account-create-update-sbmx2" event={"ID":"af9beaaa-c93c-4f38-93d4-c86d4156ad44","Type":"ContainerDied","Data":"1987ac6c330121034d1aeb641468f4aa56eaffe98de924216769859e65ddb5b1"} Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.227228 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-11b9-account-create-update-92pwx" event={"ID":"aa73323e-8833-4118-8d6b-f6de2261b33c","Type":"ContainerDied","Data":"f3b24048f88b0f72c516a827c497586c32e17a3bcbae6d63a088dff920ac76d0"} Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.227249 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-r8r58" event={"ID":"8a5da24d-77bc-444c-a344-c811c1430ea8","Type":"ContainerDied","Data":"1e4796f0357243266676f33412894b122e3852ff68bbbdfcce109d1d6908c0cc"} Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.227262 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de576413-a2f4-4407-9fbe-39e5ca9b9768","Type":"ContainerStarted","Data":"4cbd31f31d04966501ca5e1714e67320ca1a1bee08d18514b8afdcb2f2dd43f8"} Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.227286 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8d5dc9954-x56z5" event={"ID":"37e4f0f1-8158-409b-95a0-12826bddebc2","Type":"ContainerDied","Data":"fbc1cb2a207a7c02dfd363eb47219dea9ac28f33a3bba0ccbe0b055f8805a49c"} Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.227310 4805 scope.go:117] "RemoveContainer" containerID="2000ac46f12288b12f5899a913f096ea75601e47b6a656ee2c1776eba5b0c7f9" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.276048 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"33763587-13b0-4c1c-af15-3164866a25aa","Type":"ContainerDied","Data":"3e2d2c9307982c497c45b24d96291e10f9e747f4f9b057aa0106351f5b49f757"} Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.276210 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.338134 4805 generic.go:334] "Generic (PLEG): container finished" podID="dc7c489f-b7a1-4cc3-827a-3de24bd86115" containerID="8448f214ed156e900ee23963983a0f608808b4882e1b8a4183caa7b1fc178d4f" exitCode=0 Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.338613 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tt79f" event={"ID":"dc7c489f-b7a1-4cc3-827a-3de24bd86115","Type":"ContainerDied","Data":"8448f214ed156e900ee23963983a0f608808b4882e1b8a4183caa7b1fc178d4f"} Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.374042 4805 generic.go:334] "Generic (PLEG): container finished" podID="7678f4b8-da9a-4032-a853-1ec0ec5386c0" containerID="6fafffb607f4993167f2ff552cb24d75c5da7c53288dffadde8b339c867c9ce7" exitCode=0 Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.374154 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xckfh" event={"ID":"7678f4b8-da9a-4032-a853-1ec0ec5386c0","Type":"ContainerDied","Data":"6fafffb607f4993167f2ff552cb24d75c5da7c53288dffadde8b339c867c9ce7"} Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.397864 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-config-data\") pod \"33763587-13b0-4c1c-af15-3164866a25aa\" (UID: \"33763587-13b0-4c1c-af15-3164866a25aa\") " Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.403031 4805 generic.go:334] "Generic (PLEG): container finished" podID="165cd002-0510-49a6-8322-5e2fe84e99c1" 
containerID="472fda4e879fd280b1a3723d26755f78898ec620f2139b4cbb65f3f08e152152" exitCode=0 Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.403090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-13af-account-create-update-2jbb4" event={"ID":"165cd002-0510-49a6-8322-5e2fe84e99c1","Type":"ContainerDied","Data":"472fda4e879fd280b1a3723d26755f78898ec620f2139b4cbb65f3f08e152152"} Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.403909 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-config-data" (OuterVolumeSpecName: "config-data") pod "33763587-13b0-4c1c-af15-3164866a25aa" (UID: "33763587-13b0-4c1c-af15-3164866a25aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.501208 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33763587-13b0-4c1c-af15-3164866a25aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.716825 4805 scope.go:117] "RemoveContainer" containerID="705098690a2e635d29fb6f5dc24c44c61f6c3b4a6030bfd52b4d56c668dfd944" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.743679 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-r8r58" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.765326 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.766563 4805 scope.go:117] "RemoveContainer" containerID="9ff579908d254c88c054d1ce4b4c1d0251e0a4fa1ca6a552c83d4f094af1a0a3" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.779088 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.815489 4805 scope.go:117] "RemoveContainer" containerID="579b13e4e98a86f06fa8ce1c9461bcbdd689cbd24875db5359c3d75ac249fae8" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.816527 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk6km\" (UniqueName: \"kubernetes.io/projected/8a5da24d-77bc-444c-a344-c811c1430ea8-kube-api-access-wk6km\") pod \"8a5da24d-77bc-444c-a344-c811c1430ea8\" (UID: \"8a5da24d-77bc-444c-a344-c811c1430ea8\") " Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.816566 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a5da24d-77bc-444c-a344-c811c1430ea8-operator-scripts\") pod \"8a5da24d-77bc-444c-a344-c811c1430ea8\" (UID: \"8a5da24d-77bc-444c-a344-c811c1430ea8\") " Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.823222 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a5da24d-77bc-444c-a344-c811c1430ea8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a5da24d-77bc-444c-a344-c811c1430ea8" (UID: "8a5da24d-77bc-444c-a344-c811c1430ea8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.832122 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a5da24d-77bc-444c-a344-c811c1430ea8-kube-api-access-wk6km" (OuterVolumeSpecName: "kube-api-access-wk6km") pod "8a5da24d-77bc-444c-a344-c811c1430ea8" (UID: "8a5da24d-77bc-444c-a344-c811c1430ea8"). InnerVolumeSpecName "kube-api-access-wk6km". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.837254 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:17:32 crc kubenswrapper[4805]: E0216 21:17:32.837765 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e4f0f1-8158-409b-95a0-12826bddebc2" containerName="neutron-api" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.837789 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e4f0f1-8158-409b-95a0-12826bddebc2" containerName="neutron-api" Feb 16 21:17:32 crc kubenswrapper[4805]: E0216 21:17:32.837807 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a5da24d-77bc-444c-a344-c811c1430ea8" containerName="mariadb-database-create" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.837816 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5da24d-77bc-444c-a344-c811c1430ea8" containerName="mariadb-database-create" Feb 16 21:17:32 crc kubenswrapper[4805]: E0216 21:17:32.837830 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e4f0f1-8158-409b-95a0-12826bddebc2" containerName="neutron-httpd" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.837837 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e4f0f1-8158-409b-95a0-12826bddebc2" containerName="neutron-httpd" Feb 16 21:17:32 crc kubenswrapper[4805]: E0216 21:17:32.837847 4805 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="33763587-13b0-4c1c-af15-3164866a25aa" containerName="glance-httpd" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.837853 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="33763587-13b0-4c1c-af15-3164866a25aa" containerName="glance-httpd" Feb 16 21:17:32 crc kubenswrapper[4805]: E0216 21:17:32.837869 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33763587-13b0-4c1c-af15-3164866a25aa" containerName="glance-log" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.837874 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="33763587-13b0-4c1c-af15-3164866a25aa" containerName="glance-log" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.838099 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e4f0f1-8158-409b-95a0-12826bddebc2" containerName="neutron-api" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.838116 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a5da24d-77bc-444c-a344-c811c1430ea8" containerName="mariadb-database-create" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.838125 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="33763587-13b0-4c1c-af15-3164866a25aa" containerName="glance-log" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.838144 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="33763587-13b0-4c1c-af15-3164866a25aa" containerName="glance-httpd" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.838168 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e4f0f1-8158-409b-95a0-12826bddebc2" containerName="neutron-httpd" Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.839589 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.846868 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.847218 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.848641 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.850266 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8d5dc9954-x56z5"]
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.854298 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hrrrc"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.873587 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8d5dc9954-x56z5"]
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.923344 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj42r\" (UniqueName: \"kubernetes.io/projected/3bef306d-96b1-4442-a34e-b6e8aa67ec62-kube-api-access-sj42r\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.923387 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.923433 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.923452 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.923514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bef306d-96b1-4442-a34e-b6e8aa67ec62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.923585 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.923602 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bef306d-96b1-4442-a34e-b6e8aa67ec62-logs\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:32 crc 
kubenswrapper[4805]: I0216 21:17:32.923631 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.923691 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk6km\" (UniqueName: \"kubernetes.io/projected/8a5da24d-77bc-444c-a344-c811c1430ea8-kube-api-access-wk6km\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.923701 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a5da24d-77bc-444c-a344-c811c1430ea8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:32 crc kubenswrapper[4805]: I0216 21:17:32.933021 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.026018 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.026316 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bef306d-96b1-4442-a34e-b6e8aa67ec62-logs\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.026453 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.026877 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bef306d-96b1-4442-a34e-b6e8aa67ec62-logs\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.032378 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.032753 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj42r\" (UniqueName: \"kubernetes.io/projected/3bef306d-96b1-4442-a34e-b6e8aa67ec62-kube-api-access-sj42r\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.032836 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.033270 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.046852 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.033139 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.047108 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bef306d-96b1-4442-a34e-b6e8aa67ec62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.047746 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bef306d-96b1-4442-a34e-b6e8aa67ec62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.037288 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice...
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.047811 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/13fbba481ac34178d672430e609e409da28aa7e56b577de46c4337378ecf394e/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.048495 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.053608 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bef306d-96b1-4442-a34e-b6e8aa67ec62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.053824 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj42r\" (UniqueName: \"kubernetes.io/projected/3bef306d-96b1-4442-a34e-b6e8aa67ec62-kube-api-access-sj42r\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.095152 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7795-5e6b-45fa-a0b7-58a48e82ac12\") pod \"glance-default-internal-api-0\" (UID: \"3bef306d-96b1-4442-a34e-b6e8aa67ec62\") " pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.101351 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.459523 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de576413-a2f4-4407-9fbe-39e5ca9b9768","Type":"ContainerStarted","Data":"d105b11bad62b39b9dbf8963877222d2bae94c84275440b2dd8ea989f58dc880"}
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.461794 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-r8r58"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.461814 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-r8r58" event={"ID":"8a5da24d-77bc-444c-a344-c811c1430ea8","Type":"ContainerDied","Data":"006d0bf3ba21631476b76f6484d50c51c16f70dbc791e36cc86020fc3a7c1571"}
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.461867 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006d0bf3ba21631476b76f6484d50c51c16f70dbc791e36cc86020fc3a7c1571"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.463482 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f241b99d-b7d7-4897-9cfa-bd3201582861","Type":"ContainerStarted","Data":"8a096b120b2f6b458044d5cb0c61651fc6f4dc241f1bbf441480325e9d87d49a"}
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.637990 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33763587-13b0-4c1c-af15-3164866a25aa" path="/var/lib/kubelet/pods/33763587-13b0-4c1c-af15-3164866a25aa/volumes"
Feb 16 21:17:33 crc 
kubenswrapper[4805]: I0216 21:17:33.639212 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e4f0f1-8158-409b-95a0-12826bddebc2" path="/var/lib/kubelet/pods/37e4f0f1-8158-409b-95a0-12826bddebc2/volumes"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.754014 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-58889dd686-zfmvp"]
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.755731 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.758828 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.759129 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-rjr7h"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.762881 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.827847 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-58889dd686-zfmvp"]
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.874173 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-combined-ca-bundle\") pod \"heat-engine-58889dd686-zfmvp\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.874342 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcc6t\" (UniqueName: \"kubernetes.io/projected/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-kube-api-access-qcc6t\") pod \"heat-engine-58889dd686-zfmvp\" (UID: 
\"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.874373 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data-custom\") pod \"heat-engine-58889dd686-zfmvp\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.874428 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data\") pod \"heat-engine-58889dd686-zfmvp\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.900888 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-577d8c6b9f-9cj7p"]
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.904315 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.910388 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Feb 16 21:17:33 crc kubenswrapper[4805]: I0216 21:17:33.937927 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.041954 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data\") pod \"heat-engine-58889dd686-zfmvp\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.042049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-combined-ca-bundle\") pod \"heat-engine-58889dd686-zfmvp\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.042094 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.042201 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dj55\" (UniqueName: \"kubernetes.io/projected/e29995ab-edfe-486b-bae9-a35226de0320-kube-api-access-5dj55\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" 
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.042322 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data-custom\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.042460 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcc6t\" (UniqueName: \"kubernetes.io/projected/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-kube-api-access-qcc6t\") pod \"heat-engine-58889dd686-zfmvp\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.042510 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data-custom\") pod \"heat-engine-58889dd686-zfmvp\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.042550 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-combined-ca-bundle\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.057377 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-combined-ca-bundle\") pod \"heat-engine-58889dd686-zfmvp\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " 
pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.061359 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data-custom\") pod \"heat-engine-58889dd686-zfmvp\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.083872 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcc6t\" (UniqueName: \"kubernetes.io/projected/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-kube-api-access-qcc6t\") pod \"heat-engine-58889dd686-zfmvp\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.087596 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data\") pod \"heat-engine-58889dd686-zfmvp\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.155097 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-577d8c6b9f-9cj7p"]
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.155412 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-67lql"]
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.157229 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.182268 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-58889dd686-zfmvp"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.182375 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-67lql"]
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.188051 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data-custom\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.188142 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.188165 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-config\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.188202 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.188237 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-combined-ca-bundle\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.188255 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.188320 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.188337 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbcqv\" (UniqueName: \"kubernetes.io/projected/dbf9f4f7-172d-4321-8294-dc697a17b360-kube-api-access-wbcqv\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.188379 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.189145 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5dj55\" (UniqueName: \"kubernetes.io/projected/e29995ab-edfe-486b-bae9-a35226de0320-kube-api-access-5dj55\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.194457 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.204702 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-c779fd9d8-2bxwh"]
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.206911 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-c779fd9d8-2bxwh"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.209289 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.212264 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xckfh"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.216513 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dj55\" (UniqueName: \"kubernetes.io/projected/e29995ab-edfe-486b-bae9-a35226de0320-kube-api-access-5dj55\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.219640 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-combined-ca-bundle\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.228235 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data-custom\") pod \"heat-cfnapi-577d8c6b9f-9cj7p\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.252044 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-c779fd9d8-2bxwh"]
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.292341 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjcwk\" (UniqueName: \"kubernetes.io/projected/7678f4b8-da9a-4032-a853-1ec0ec5386c0-kube-api-access-zjcwk\") pod \"7678f4b8-da9a-4032-a853-1ec0ec5386c0\" (UID: \"7678f4b8-da9a-4032-a853-1ec0ec5386c0\") "
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.305483 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7678f4b8-da9a-4032-a853-1ec0ec5386c0-operator-scripts\") pod \"7678f4b8-da9a-4032-a853-1ec0ec5386c0\" (UID: \"7678f4b8-da9a-4032-a853-1ec0ec5386c0\") "
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.305924 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vmjp\" (UniqueName: \"kubernetes.io/projected/77978f8e-132a-4c91-ba44-f15707b3bedf-kube-api-access-8vmjp\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.306236 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data-custom\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.306353 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.306376 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-config\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.306430 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.306507 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.306575 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-combined-ca-bundle\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.306641 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.306669 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbcqv\" (UniqueName: \"kubernetes.io/projected/dbf9f4f7-172d-4321-8294-dc697a17b360-kube-api-access-wbcqv\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql"
Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.306694 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.308540 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.306626 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7678f4b8-da9a-4032-a853-1ec0ec5386c0-kube-api-access-zjcwk" (OuterVolumeSpecName: "kube-api-access-zjcwk") pod "7678f4b8-da9a-4032-a853-1ec0ec5386c0" (UID: "7678f4b8-da9a-4032-a853-1ec0ec5386c0"). InnerVolumeSpecName "kube-api-access-zjcwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.307280 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7678f4b8-da9a-4032-a853-1ec0ec5386c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7678f4b8-da9a-4032-a853-1ec0ec5386c0" (UID: "7678f4b8-da9a-4032-a853-1ec0ec5386c0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.309120 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.309585 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.309824 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.333453 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-config\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.387900 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbcqv\" (UniqueName: \"kubernetes.io/projected/dbf9f4f7-172d-4321-8294-dc697a17b360-kube-api-access-wbcqv\") pod \"dnsmasq-dns-688b9f5b49-67lql\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " pod="openstack/dnsmasq-dns-688b9f5b49-67lql" Feb 16 21:17:34 crc kubenswrapper[4805]: 
I0216 21:17:34.413002 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data-custom\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.413135 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-combined-ca-bundle\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.413171 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.413197 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vmjp\" (UniqueName: \"kubernetes.io/projected/77978f8e-132a-4c91-ba44-f15707b3bedf-kube-api-access-8vmjp\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.413318 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7678f4b8-da9a-4032-a853-1ec0ec5386c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.413337 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjcwk\" (UniqueName: 
\"kubernetes.io/projected/7678f4b8-da9a-4032-a853-1ec0ec5386c0-kube-api-access-zjcwk\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.423831 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-combined-ca-bundle\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.432210 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data-custom\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.432406 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.434098 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-11b9-account-create-update-92pwx" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.434298 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vmjp\" (UniqueName: \"kubernetes.io/projected/77978f8e-132a-4c91-ba44-f15707b3bedf-kube-api-access-8vmjp\") pod \"heat-api-c779fd9d8-2bxwh\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.447399 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tt79f" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.507060 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.540095 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.541940 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-13af-account-create-update-2jbb4" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.545692 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f241b99d-b7d7-4897-9cfa-bd3201582861","Type":"ContainerStarted","Data":"8b807ab3cc266e50af6b56b049375a2db7063f38674595e0d9d10d43819f2fbc"} Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.546801 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.549615 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3bef306d-96b1-4442-a34e-b6e8aa67ec62","Type":"ContainerStarted","Data":"43ffcdea6de3017a029429e8c9e4de20a7b8decdef3b7c89306ee51d1af17e63"} Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.588377 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tt79f" event={"ID":"dc7c489f-b7a1-4cc3-827a-3de24bd86115","Type":"ContainerDied","Data":"8943561148c42033ecc3da26a6e9ca4ba43e9cce4fff5357f6713d90dce50692"} Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.601819 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8943561148c42033ecc3da26a6e9ca4ba43e9cce4fff5357f6713d90dce50692" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 
21:17:34.596426 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.590871 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tt79f" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.616505 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.616484117 podStartE2EDuration="6.616484117s" podCreationTimestamp="2026-02-16 21:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:34.611125737 +0000 UTC m=+1272.429809032" watchObservedRunningTime="2026-02-16 21:17:34.616484117 +0000 UTC m=+1272.435167412" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.617580 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa73323e-8833-4118-8d6b-f6de2261b33c-operator-scripts\") pod \"aa73323e-8833-4118-8d6b-f6de2261b33c\" (UID: \"aa73323e-8833-4118-8d6b-f6de2261b33c\") " Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.617623 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xddj\" (UniqueName: \"kubernetes.io/projected/aa73323e-8833-4118-8d6b-f6de2261b33c-kube-api-access-9xddj\") pod \"aa73323e-8833-4118-8d6b-f6de2261b33c\" (UID: \"aa73323e-8833-4118-8d6b-f6de2261b33c\") " Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.617672 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f46vd\" (UniqueName: \"kubernetes.io/projected/dc7c489f-b7a1-4cc3-827a-3de24bd86115-kube-api-access-f46vd\") pod \"dc7c489f-b7a1-4cc3-827a-3de24bd86115\" (UID: \"dc7c489f-b7a1-4cc3-827a-3de24bd86115\") " Feb 16 21:17:34 crc 
kubenswrapper[4805]: I0216 21:17:34.617841 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7c489f-b7a1-4cc3-827a-3de24bd86115-operator-scripts\") pod \"dc7c489f-b7a1-4cc3-827a-3de24bd86115\" (UID: \"dc7c489f-b7a1-4cc3-827a-3de24bd86115\") " Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.618926 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa73323e-8833-4118-8d6b-f6de2261b33c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa73323e-8833-4118-8d6b-f6de2261b33c" (UID: "aa73323e-8833-4118-8d6b-f6de2261b33c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.641368 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa73323e-8833-4118-8d6b-f6de2261b33c-kube-api-access-9xddj" (OuterVolumeSpecName: "kube-api-access-9xddj") pod "aa73323e-8833-4118-8d6b-f6de2261b33c" (UID: "aa73323e-8833-4118-8d6b-f6de2261b33c"). InnerVolumeSpecName "kube-api-access-9xddj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.642980 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7c489f-b7a1-4cc3-827a-3de24bd86115-kube-api-access-f46vd" (OuterVolumeSpecName: "kube-api-access-f46vd") pod "dc7c489f-b7a1-4cc3-827a-3de24bd86115" (UID: "dc7c489f-b7a1-4cc3-827a-3de24bd86115"). InnerVolumeSpecName "kube-api-access-f46vd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.644015 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc7c489f-b7a1-4cc3-827a-3de24bd86115-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc7c489f-b7a1-4cc3-827a-3de24bd86115" (UID: "dc7c489f-b7a1-4cc3-827a-3de24bd86115"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.654124 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xckfh" event={"ID":"7678f4b8-da9a-4032-a853-1ec0ec5386c0","Type":"ContainerDied","Data":"e1c2bdcebf3439628e116b5240543f9e6238a27fee265db7fa938e29aaa0e5d2"} Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.654166 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1c2bdcebf3439628e116b5240543f9e6238a27fee265db7fa938e29aaa0e5d2" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.654247 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xckfh" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.671066 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1975-account-create-update-sbmx2" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.686107 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-11b9-account-create-update-92pwx" event={"ID":"aa73323e-8833-4118-8d6b-f6de2261b33c","Type":"ContainerDied","Data":"a536dd3fada0af5e7b46bd3cbe4a8ff19bf2c0905a7d9dd1ef46d45baa78de49"} Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.686148 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a536dd3fada0af5e7b46bd3cbe4a8ff19bf2c0905a7d9dd1ef46d45baa78de49" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.686202 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-11b9-account-create-update-92pwx" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.719968 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtm9k\" (UniqueName: \"kubernetes.io/projected/165cd002-0510-49a6-8322-5e2fe84e99c1-kube-api-access-qtm9k\") pod \"165cd002-0510-49a6-8322-5e2fe84e99c1\" (UID: \"165cd002-0510-49a6-8322-5e2fe84e99c1\") " Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.720160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/165cd002-0510-49a6-8322-5e2fe84e99c1-operator-scripts\") pod \"165cd002-0510-49a6-8322-5e2fe84e99c1\" (UID: \"165cd002-0510-49a6-8322-5e2fe84e99c1\") " Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.720814 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa73323e-8833-4118-8d6b-f6de2261b33c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.720834 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xddj\" (UniqueName: 
\"kubernetes.io/projected/aa73323e-8833-4118-8d6b-f6de2261b33c-kube-api-access-9xddj\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.720846 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f46vd\" (UniqueName: \"kubernetes.io/projected/dc7c489f-b7a1-4cc3-827a-3de24bd86115-kube-api-access-f46vd\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.720854 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7c489f-b7a1-4cc3-827a-3de24bd86115-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.725410 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165cd002-0510-49a6-8322-5e2fe84e99c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "165cd002-0510-49a6-8322-5e2fe84e99c1" (UID: "165cd002-0510-49a6-8322-5e2fe84e99c1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.752882 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-13af-account-create-update-2jbb4" event={"ID":"165cd002-0510-49a6-8322-5e2fe84e99c1","Type":"ContainerDied","Data":"10e3c56556162da87a2edc4dbecb711e9d0c62274cead62968c1fb446dc2dfdf"} Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.752929 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10e3c56556162da87a2edc4dbecb711e9d0c62274cead62968c1fb446dc2dfdf" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.753053 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-13af-account-create-update-2jbb4" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.775932 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/165cd002-0510-49a6-8322-5e2fe84e99c1-kube-api-access-qtm9k" (OuterVolumeSpecName: "kube-api-access-qtm9k") pod "165cd002-0510-49a6-8322-5e2fe84e99c1" (UID: "165cd002-0510-49a6-8322-5e2fe84e99c1"). InnerVolumeSpecName "kube-api-access-qtm9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.826609 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gzw4\" (UniqueName: \"kubernetes.io/projected/af9beaaa-c93c-4f38-93d4-c86d4156ad44-kube-api-access-2gzw4\") pod \"af9beaaa-c93c-4f38-93d4-c86d4156ad44\" (UID: \"af9beaaa-c93c-4f38-93d4-c86d4156ad44\") " Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.827289 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af9beaaa-c93c-4f38-93d4-c86d4156ad44-operator-scripts\") pod \"af9beaaa-c93c-4f38-93d4-c86d4156ad44\" (UID: \"af9beaaa-c93c-4f38-93d4-c86d4156ad44\") " Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.828312 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtm9k\" (UniqueName: \"kubernetes.io/projected/165cd002-0510-49a6-8322-5e2fe84e99c1-kube-api-access-qtm9k\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.828479 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/165cd002-0510-49a6-8322-5e2fe84e99c1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.836873 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/af9beaaa-c93c-4f38-93d4-c86d4156ad44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af9beaaa-c93c-4f38-93d4-c86d4156ad44" (UID: "af9beaaa-c93c-4f38-93d4-c86d4156ad44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.850727 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af9beaaa-c93c-4f38-93d4-c86d4156ad44-kube-api-access-2gzw4" (OuterVolumeSpecName: "kube-api-access-2gzw4") pod "af9beaaa-c93c-4f38-93d4-c86d4156ad44" (UID: "af9beaaa-c93c-4f38-93d4-c86d4156ad44"). InnerVolumeSpecName "kube-api-access-2gzw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.933826 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gzw4\" (UniqueName: \"kubernetes.io/projected/af9beaaa-c93c-4f38-93d4-c86d4156ad44-kube-api-access-2gzw4\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:34 crc kubenswrapper[4805]: I0216 21:17:34.933855 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af9beaaa-c93c-4f38-93d4-c86d4156ad44-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:35 crc kubenswrapper[4805]: I0216 21:17:35.680890 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-577d8c6b9f-9cj7p"] Feb 16 21:17:35 crc kubenswrapper[4805]: I0216 21:17:35.699700 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-58889dd686-zfmvp"] Feb 16 21:17:35 crc kubenswrapper[4805]: I0216 21:17:35.892547 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-67lql"] Feb 16 21:17:35 crc kubenswrapper[4805]: I0216 21:17:35.908410 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1975-account-create-update-sbmx2" 
event={"ID":"af9beaaa-c93c-4f38-93d4-c86d4156ad44","Type":"ContainerDied","Data":"5f7c5295bd19ed6669d0ecaf8db53ded4c6f5cc1b18c5d143908aff1eb6d6298"} Feb 16 21:17:35 crc kubenswrapper[4805]: I0216 21:17:35.908444 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f7c5295bd19ed6669d0ecaf8db53ded4c6f5cc1b18c5d143908aff1eb6d6298" Feb 16 21:17:35 crc kubenswrapper[4805]: I0216 21:17:35.908515 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1975-account-create-update-sbmx2" Feb 16 21:17:35 crc kubenswrapper[4805]: I0216 21:17:35.942031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" event={"ID":"e29995ab-edfe-486b-bae9-a35226de0320","Type":"ContainerStarted","Data":"5ec188d56779ce2ea4983c25b4acec83901d0384ea05b6677dc403943e43651e"} Feb 16 21:17:35 crc kubenswrapper[4805]: I0216 21:17:35.974984 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-58889dd686-zfmvp" event={"ID":"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f","Type":"ContainerStarted","Data":"4a124320440ce31c56676b19a2063be9fd485ea9a42940effa5493812ff72b28"} Feb 16 21:17:35 crc kubenswrapper[4805]: I0216 21:17:35.991325 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3bef306d-96b1-4442-a34e-b6e8aa67ec62","Type":"ContainerStarted","Data":"6c30f38403cc3589199abf5f42bbfc50fa9f2cbe598800a4af08004db4b3fa0d"} Feb 16 21:17:36 crc kubenswrapper[4805]: I0216 21:17:36.113647 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-c779fd9d8-2bxwh"] Feb 16 21:17:36 crc kubenswrapper[4805]: W0216 21:17:36.129857 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77978f8e_132a_4c91_ba44_f15707b3bedf.slice/crio-121fc60102b12208606d42993122dde8738b4a358017e91a939ee8e73dbe3dde 
WatchSource:0}: Error finding container 121fc60102b12208606d42993122dde8738b4a358017e91a939ee8e73dbe3dde: Status 404 returned error can't find the container with id 121fc60102b12208606d42993122dde8738b4a358017e91a939ee8e73dbe3dde Feb 16 21:17:36 crc kubenswrapper[4805]: I0216 21:17:36.868061 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f8cfbb668-2nz5c" Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.016623 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f8cfbb668-2nz5c" Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.039994 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-58889dd686-zfmvp" event={"ID":"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f","Type":"ContainerStarted","Data":"be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990"} Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.040952 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-58889dd686-zfmvp" Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.050963 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c779fd9d8-2bxwh" event={"ID":"77978f8e-132a-4c91-ba44-f15707b3bedf","Type":"ContainerStarted","Data":"121fc60102b12208606d42993122dde8738b4a358017e91a939ee8e73dbe3dde"} Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.057679 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3bef306d-96b1-4442-a34e-b6e8aa67ec62","Type":"ContainerStarted","Data":"df9e7a5b03b502d26c34fe789449fdfa34bdc10ca8dea5a22925c705eff6a026"} Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.065159 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"de576413-a2f4-4407-9fbe-39e5ca9b9768","Type":"ContainerStarted","Data":"53ff48aa78e8ee5f07e598162ba8d33a5aefa0dea3ff1c6c2650b15df4948441"} Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.066080 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="ceilometer-central-agent" containerID="cri-o://15034a1d378a1bd9a0f6d471331bd3330700330ddc6bbf347c2e3a05f642b6cc" gracePeriod=30 Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.066363 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.066402 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="proxy-httpd" containerID="cri-o://53ff48aa78e8ee5f07e598162ba8d33a5aefa0dea3ff1c6c2650b15df4948441" gracePeriod=30 Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.066448 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="sg-core" containerID="cri-o://d105b11bad62b39b9dbf8963877222d2bae94c84275440b2dd8ea989f58dc880" gracePeriod=30 Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.066477 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="ceilometer-notification-agent" containerID="cri-o://4cbd31f31d04966501ca5e1714e67320ca1a1bee08d18514b8afdcb2f2dd43f8" gracePeriod=30 Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.071302 4805 generic.go:334] "Generic (PLEG): container finished" podID="dbf9f4f7-172d-4321-8294-dc697a17b360" containerID="241d1074a5b0c4eadb147c9832904887a3f5e3d2385fa9e1ff68bd5adce179a1" exitCode=0 Feb 16 21:17:37 crc 
kubenswrapper[4805]: I0216 21:17:37.072750 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" event={"ID":"dbf9f4f7-172d-4321-8294-dc697a17b360","Type":"ContainerDied","Data":"241d1074a5b0c4eadb147c9832904887a3f5e3d2385fa9e1ff68bd5adce179a1"} Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.072779 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" event={"ID":"dbf9f4f7-172d-4321-8294-dc697a17b360","Type":"ContainerStarted","Data":"8edac093ae3bfb06ebf73a7fd96b174bfb9f1b9b54a646ecca0763f926c8b644"} Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.159405 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-58889dd686-zfmvp" podStartSLOduration=4.15938662 podStartE2EDuration="4.15938662s" podCreationTimestamp="2026-02-16 21:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:37.155586314 +0000 UTC m=+1274.974269609" watchObservedRunningTime="2026-02-16 21:17:37.15938662 +0000 UTC m=+1274.978069915" Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.207788 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-745445cc4d-b5chv"] Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.208386 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-745445cc4d-b5chv" podUID="381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" containerName="placement-log" containerID="cri-o://7abd4a6d8d53b28c9f2baa9f0f3385d12b4e6fc2ac0992220519b15a94f3ba4c" gracePeriod=30 Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.208557 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-745445cc4d-b5chv" podUID="381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" containerName="placement-api" 
containerID="cri-o://849a536c17e68d02ab84de54c9406ad3e7da63f7d98a5b0728e1904ccf580c7e" gracePeriod=30 Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.262667 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.258706343 podStartE2EDuration="5.258706343s" podCreationTimestamp="2026-02-16 21:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:37.20487487 +0000 UTC m=+1275.023558165" watchObservedRunningTime="2026-02-16 21:17:37.258706343 +0000 UTC m=+1275.077389638" Feb 16 21:17:37 crc kubenswrapper[4805]: I0216 21:17:37.289208 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.399848003 podStartE2EDuration="19.289191415s" podCreationTimestamp="2026-02-16 21:17:18 +0000 UTC" firstStartedPulling="2026-02-16 21:17:19.66134282 +0000 UTC m=+1257.480026115" lastFinishedPulling="2026-02-16 21:17:36.550686232 +0000 UTC m=+1274.369369527" observedRunningTime="2026-02-16 21:17:37.27861909 +0000 UTC m=+1275.097302385" watchObservedRunningTime="2026-02-16 21:17:37.289191415 +0000 UTC m=+1275.107874710" Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.092678 4805 generic.go:334] "Generic (PLEG): container finished" podID="381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" containerID="7abd4a6d8d53b28c9f2baa9f0f3385d12b4e6fc2ac0992220519b15a94f3ba4c" exitCode=143 Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.092998 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-745445cc4d-b5chv" event={"ID":"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd","Type":"ContainerDied","Data":"7abd4a6d8d53b28c9f2baa9f0f3385d12b4e6fc2ac0992220519b15a94f3ba4c"} Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.099651 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.099697 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.099916 4805 generic.go:334] "Generic (PLEG): container finished" podID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerID="53ff48aa78e8ee5f07e598162ba8d33a5aefa0dea3ff1c6c2650b15df4948441" exitCode=0 Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.099965 4805 generic.go:334] "Generic (PLEG): container finished" podID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerID="d105b11bad62b39b9dbf8963877222d2bae94c84275440b2dd8ea989f58dc880" exitCode=2 Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.099974 4805 generic.go:334] "Generic (PLEG): container finished" podID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerID="4cbd31f31d04966501ca5e1714e67320ca1a1bee08d18514b8afdcb2f2dd43f8" exitCode=0 Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.099990 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de576413-a2f4-4407-9fbe-39e5ca9b9768","Type":"ContainerDied","Data":"53ff48aa78e8ee5f07e598162ba8d33a5aefa0dea3ff1c6c2650b15df4948441"} Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.100020 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de576413-a2f4-4407-9fbe-39e5ca9b9768","Type":"ContainerDied","Data":"d105b11bad62b39b9dbf8963877222d2bae94c84275440b2dd8ea989f58dc880"} Feb 16 21:17:38 crc 
kubenswrapper[4805]: I0216 21:17:38.100029 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de576413-a2f4-4407-9fbe-39e5ca9b9768","Type":"ContainerDied","Data":"4cbd31f31d04966501ca5e1714e67320ca1a1bee08d18514b8afdcb2f2dd43f8"} Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.104361 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" event={"ID":"dbf9f4f7-172d-4321-8294-dc697a17b360","Type":"ContainerStarted","Data":"029f5def6e94a21aebf1a5a0a5eaa6411c824e627835aaac91fbec55d38f7ab4"} Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.104844 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" Feb 16 21:17:38 crc kubenswrapper[4805]: I0216 21:17:38.123248 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" podStartSLOduration=5.123232506 podStartE2EDuration="5.123232506s" podCreationTimestamp="2026-02-16 21:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:38.122055553 +0000 UTC m=+1275.940738848" watchObservedRunningTime="2026-02-16 21:17:38.123232506 +0000 UTC m=+1275.941915801" Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.120570 4805 generic.go:334] "Generic (PLEG): container finished" podID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerID="15034a1d378a1bd9a0f6d471331bd3330700330ddc6bbf347c2e3a05f642b6cc" exitCode=0 Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.120827 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de576413-a2f4-4407-9fbe-39e5ca9b9768","Type":"ContainerDied","Data":"15034a1d378a1bd9a0f6d471331bd3330700330ddc6bbf347c2e3a05f642b6cc"} Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.749329 4805 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.823069 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mhkg\" (UniqueName: \"kubernetes.io/projected/de576413-a2f4-4407-9fbe-39e5ca9b9768-kube-api-access-2mhkg\") pod \"de576413-a2f4-4407-9fbe-39e5ca9b9768\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.823165 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-sg-core-conf-yaml\") pod \"de576413-a2f4-4407-9fbe-39e5ca9b9768\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.823268 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-log-httpd\") pod \"de576413-a2f4-4407-9fbe-39e5ca9b9768\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.823351 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-config-data\") pod \"de576413-a2f4-4407-9fbe-39e5ca9b9768\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.823421 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-combined-ca-bundle\") pod \"de576413-a2f4-4407-9fbe-39e5ca9b9768\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.823512 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-run-httpd\") pod \"de576413-a2f4-4407-9fbe-39e5ca9b9768\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.823538 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-scripts\") pod \"de576413-a2f4-4407-9fbe-39e5ca9b9768\" (UID: \"de576413-a2f4-4407-9fbe-39e5ca9b9768\") " Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.823947 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "de576413-a2f4-4407-9fbe-39e5ca9b9768" (UID: "de576413-a2f4-4407-9fbe-39e5ca9b9768"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.824100 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.824211 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "de576413-a2f4-4407-9fbe-39e5ca9b9768" (UID: "de576413-a2f4-4407-9fbe-39e5ca9b9768"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.830792 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-scripts" (OuterVolumeSpecName: "scripts") pod "de576413-a2f4-4407-9fbe-39e5ca9b9768" (UID: "de576413-a2f4-4407-9fbe-39e5ca9b9768"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.832516 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de576413-a2f4-4407-9fbe-39e5ca9b9768-kube-api-access-2mhkg" (OuterVolumeSpecName: "kube-api-access-2mhkg") pod "de576413-a2f4-4407-9fbe-39e5ca9b9768" (UID: "de576413-a2f4-4407-9fbe-39e5ca9b9768"). InnerVolumeSpecName "kube-api-access-2mhkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.925806 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mhkg\" (UniqueName: \"kubernetes.io/projected/de576413-a2f4-4407-9fbe-39e5ca9b9768-kube-api-access-2mhkg\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.925847 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de576413-a2f4-4407-9fbe-39e5ca9b9768-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.925856 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:39 crc kubenswrapper[4805]: I0216 21:17:39.969934 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "de576413-a2f4-4407-9fbe-39e5ca9b9768" (UID: "de576413-a2f4-4407-9fbe-39e5ca9b9768"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.028399 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.144775 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de576413-a2f4-4407-9fbe-39e5ca9b9768","Type":"ContainerDied","Data":"b315bdd1b7d83ad0c121a8863ce2bac5ccd2b50b2ff8dbc38f7b17fe32515f46"} Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.144792 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.144822 4805 scope.go:117] "RemoveContainer" containerID="53ff48aa78e8ee5f07e598162ba8d33a5aefa0dea3ff1c6c2650b15df4948441" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.180699 4805 scope.go:117] "RemoveContainer" containerID="d105b11bad62b39b9dbf8963877222d2bae94c84275440b2dd8ea989f58dc880" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.206570 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de576413-a2f4-4407-9fbe-39e5ca9b9768" (UID: "de576413-a2f4-4407-9fbe-39e5ca9b9768"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.225893 4805 scope.go:117] "RemoveContainer" containerID="4cbd31f31d04966501ca5e1714e67320ca1a1bee08d18514b8afdcb2f2dd43f8" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.235430 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.285095 4805 scope.go:117] "RemoveContainer" containerID="15034a1d378a1bd9a0f6d471331bd3330700330ddc6bbf347c2e3a05f642b6cc" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.345113 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-config-data" (OuterVolumeSpecName: "config-data") pod "de576413-a2f4-4407-9fbe-39e5ca9b9768" (UID: "de576413-a2f4-4407-9fbe-39e5ca9b9768"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.376556 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-86jb7"] Feb 16 21:17:40 crc kubenswrapper[4805]: E0216 21:17:40.377115 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa73323e-8833-4118-8d6b-f6de2261b33c" containerName="mariadb-account-create-update" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377132 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa73323e-8833-4118-8d6b-f6de2261b33c" containerName="mariadb-account-create-update" Feb 16 21:17:40 crc kubenswrapper[4805]: E0216 21:17:40.377158 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9beaaa-c93c-4f38-93d4-c86d4156ad44" containerName="mariadb-account-create-update" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377164 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9beaaa-c93c-4f38-93d4-c86d4156ad44" containerName="mariadb-account-create-update" Feb 16 21:17:40 crc kubenswrapper[4805]: E0216 21:17:40.377173 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="ceilometer-central-agent" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377179 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="ceilometer-central-agent" Feb 16 21:17:40 crc kubenswrapper[4805]: E0216 21:17:40.377196 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165cd002-0510-49a6-8322-5e2fe84e99c1" containerName="mariadb-account-create-update" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377201 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="165cd002-0510-49a6-8322-5e2fe84e99c1" containerName="mariadb-account-create-update" Feb 16 21:17:40 crc kubenswrapper[4805]: E0216 21:17:40.377216 4805 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="ceilometer-notification-agent" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377225 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="ceilometer-notification-agent" Feb 16 21:17:40 crc kubenswrapper[4805]: E0216 21:17:40.377240 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="proxy-httpd" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377246 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="proxy-httpd" Feb 16 21:17:40 crc kubenswrapper[4805]: E0216 21:17:40.377264 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="sg-core" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377271 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="sg-core" Feb 16 21:17:40 crc kubenswrapper[4805]: E0216 21:17:40.377289 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7678f4b8-da9a-4032-a853-1ec0ec5386c0" containerName="mariadb-database-create" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377296 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7678f4b8-da9a-4032-a853-1ec0ec5386c0" containerName="mariadb-database-create" Feb 16 21:17:40 crc kubenswrapper[4805]: E0216 21:17:40.377308 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7c489f-b7a1-4cc3-827a-3de24bd86115" containerName="mariadb-database-create" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377315 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7c489f-b7a1-4cc3-827a-3de24bd86115" containerName="mariadb-database-create" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377517 
4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="proxy-httpd" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377530 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="sg-core" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377542 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7678f4b8-da9a-4032-a853-1ec0ec5386c0" containerName="mariadb-database-create" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377554 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="ceilometer-notification-agent" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377569 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" containerName="ceilometer-central-agent" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377578 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="165cd002-0510-49a6-8322-5e2fe84e99c1" containerName="mariadb-account-create-update" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377591 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="af9beaaa-c93c-4f38-93d4-c86d4156ad44" containerName="mariadb-account-create-update" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377597 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7c489f-b7a1-4cc3-827a-3de24bd86115" containerName="mariadb-database-create" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.377608 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa73323e-8833-4118-8d6b-f6de2261b33c" containerName="mariadb-account-create-update" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.378666 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.392290 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fbgc6" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.395152 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.395390 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.408695 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-86jb7"] Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.449371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lc7c\" (UniqueName: \"kubernetes.io/projected/697d83c1-bcef-40ab-b260-070417df0a62-kube-api-access-2lc7c\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.449551 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.449596 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-scripts\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " 
pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.451127 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-config-data\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.451319 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de576413-a2f4-4407-9fbe-39e5ca9b9768-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.530045 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.548120 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.554196 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-config-data\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.554281 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lc7c\" (UniqueName: \"kubernetes.io/projected/697d83c1-bcef-40ab-b260-070417df0a62-kube-api-access-2lc7c\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.554319 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.554338 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-scripts\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.558027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-scripts\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.559233 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.560783 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.563595 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.569526 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.570029 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.574245 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-config-data\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.585704 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lc7c\" (UniqueName: \"kubernetes.io/projected/697d83c1-bcef-40ab-b260-070417df0a62-kube-api-access-2lc7c\") pod \"nova-cell0-conductor-db-sync-86jb7\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.587742 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.656386 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-scripts\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.656441 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.656464 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.656555 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-config-data\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.656614 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-run-httpd\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.656644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2222p\" (UniqueName: \"kubernetes.io/projected/50b7f8fd-bd74-4136-b349-cb398ee9d44e-kube-api-access-2222p\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.656663 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-log-httpd\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 
21:17:40.758472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-log-httpd\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.758584 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-scripts\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.758618 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.758636 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.758791 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-config-data\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.758867 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-run-httpd\") pod \"ceilometer-0\" (UID: 
\"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.758896 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2222p\" (UniqueName: \"kubernetes.io/projected/50b7f8fd-bd74-4136-b349-cb398ee9d44e-kube-api-access-2222p\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.759827 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-log-httpd\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.761844 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-run-httpd\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.765387 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.765895 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-scripts\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.769232 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.775208 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-config-data\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.781948 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2222p\" (UniqueName: \"kubernetes.io/projected/50b7f8fd-bd74-4136-b349-cb398ee9d44e-kube-api-access-2222p\") pod \"ceilometer-0\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") " pod="openstack/ceilometer-0" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.807408 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:17:40 crc kubenswrapper[4805]: I0216 21:17:40.989236 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.177157 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" event={"ID":"e29995ab-edfe-486b-bae9-a35226de0320","Type":"ContainerStarted","Data":"74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a"} Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.177868 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.188163 4805 generic.go:334] "Generic (PLEG): container finished" podID="381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" containerID="849a536c17e68d02ab84de54c9406ad3e7da63f7d98a5b0728e1904ccf580c7e" exitCode=0 Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.188288 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-745445cc4d-b5chv" event={"ID":"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd","Type":"ContainerDied","Data":"849a536c17e68d02ab84de54c9406ad3e7da63f7d98a5b0728e1904ccf580c7e"} Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.191649 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c779fd9d8-2bxwh" event={"ID":"77978f8e-132a-4c91-ba44-f15707b3bedf","Type":"ContainerStarted","Data":"c06cec2b2b07f3102e1992bdbd48ee89ae92051112f5a7c781e0dedae9d97682"} Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.192770 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.205554 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" podStartSLOduration=4.482276287 podStartE2EDuration="8.205537413s" podCreationTimestamp="2026-02-16 21:17:33 +0000 UTC" firstStartedPulling="2026-02-16 21:17:35.670852851 +0000 UTC m=+1273.489536136" 
lastFinishedPulling="2026-02-16 21:17:39.394113967 +0000 UTC m=+1277.212797262" observedRunningTime="2026-02-16 21:17:41.204949297 +0000 UTC m=+1279.023632592" watchObservedRunningTime="2026-02-16 21:17:41.205537413 +0000 UTC m=+1279.024220708" Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.247522 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-c779fd9d8-2bxwh" podStartSLOduration=4.932832239 podStartE2EDuration="8.247504075s" podCreationTimestamp="2026-02-16 21:17:33 +0000 UTC" firstStartedPulling="2026-02-16 21:17:36.152752108 +0000 UTC m=+1273.971435403" lastFinishedPulling="2026-02-16 21:17:39.467423944 +0000 UTC m=+1277.286107239" observedRunningTime="2026-02-16 21:17:41.228287028 +0000 UTC m=+1279.046970323" watchObservedRunningTime="2026-02-16 21:17:41.247504075 +0000 UTC m=+1279.066187370" Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.429840 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-86jb7"] Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.639230 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de576413-a2f4-4407-9fbe-39e5ca9b9768" path="/var/lib/kubelet/pods/de576413-a2f4-4407-9fbe-39e5ca9b9768/volumes" Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.810914 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.901299 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-scripts\") pod \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.902415 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dv67\" (UniqueName: \"kubernetes.io/projected/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-kube-api-access-7dv67\") pod \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.902463 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-internal-tls-certs\") pod \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.902524 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-config-data\") pod \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.902660 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-logs\") pod \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.902693 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-public-tls-certs\") pod \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.904163 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-logs" (OuterVolumeSpecName: "logs") pod "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" (UID: "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.902860 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-combined-ca-bundle\") pod \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\" (UID: \"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd\") " Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.905369 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.910092 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-scripts" (OuterVolumeSpecName: "scripts") pod "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" (UID: "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.911167 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-kube-api-access-7dv67" (OuterVolumeSpecName: "kube-api-access-7dv67") pod "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" (UID: "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd"). 
InnerVolumeSpecName "kube-api-access-7dv67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:41 crc kubenswrapper[4805]: I0216 21:17:41.946785 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.001872 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" (UID: "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.018516 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.018564 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.018576 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dv67\" (UniqueName: \"kubernetes.io/projected/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-kube-api-access-7dv67\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.063831 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-config-data" (OuterVolumeSpecName: "config-data") pod "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" (UID: "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.067820 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" (UID: "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.111645 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" (UID: "381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.121256 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.121324 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.121345 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.201819 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"50b7f8fd-bd74-4136-b349-cb398ee9d44e","Type":"ContainerStarted","Data":"f891e52f0bfdcff739ca76960893031276c9c57f72698b3ad08bfeb94f2e20b7"} Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.207181 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-745445cc4d-b5chv" event={"ID":"381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd","Type":"ContainerDied","Data":"38232e8be4915c57abe56e232b464989cc2ee720a033ccfef86fa27bb319e931"} Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.207214 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-745445cc4d-b5chv" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.207235 4805 scope.go:117] "RemoveContainer" containerID="849a536c17e68d02ab84de54c9406ad3e7da63f7d98a5b0728e1904ccf580c7e" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.213841 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-86jb7" event={"ID":"697d83c1-bcef-40ab-b260-070417df0a62","Type":"ContainerStarted","Data":"21fde207bbc0e2160932fd610c58d471c43a9e84631d74ee2347e91eb07e847a"} Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.281464 4805 scope.go:117] "RemoveContainer" containerID="7abd4a6d8d53b28c9f2baa9f0f3385d12b4e6fc2ac0992220519b15a94f3ba4c" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.286846 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-745445cc4d-b5chv"] Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.320374 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-745445cc4d-b5chv"] Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.348953 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-55c677d475-j7xgs"] Feb 16 21:17:42 crc kubenswrapper[4805]: E0216 21:17:42.349530 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" 
containerName="placement-api" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.349546 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" containerName="placement-api" Feb 16 21:17:42 crc kubenswrapper[4805]: E0216 21:17:42.349588 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" containerName="placement-log" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.349596 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" containerName="placement-log" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.349823 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" containerName="placement-log" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.349837 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" containerName="placement-api" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.350676 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.372950 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-55c677d475-j7xgs"] Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.413465 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-89694dcd7-8mhwc"] Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.415483 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.425717 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-744875bc86-hlx4w"] Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.428273 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0af068-e25b-4fd8-aa7b-9898e0341869-config-data\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.428349 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kdx7\" (UniqueName: \"kubernetes.io/projected/3f0af068-e25b-4fd8-aa7b-9898e0341869-kube-api-access-6kdx7\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.428386 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0af068-e25b-4fd8-aa7b-9898e0341869-combined-ca-bundle\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.428462 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f0af068-e25b-4fd8-aa7b-9898e0341869-config-data-custom\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.433795 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.441404 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-89694dcd7-8mhwc"] Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.458676 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-744875bc86-hlx4w"] Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.531464 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.531989 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f0af068-e25b-4fd8-aa7b-9898e0341869-config-data-custom\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.532048 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98wt9\" (UniqueName: \"kubernetes.io/projected/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-kube-api-access-98wt9\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.532130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " pod="openstack/heat-api-89694dcd7-8mhwc" 
Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.532161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-combined-ca-bundle\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.532203 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-combined-ca-bundle\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.532231 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data-custom\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.532421 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0af068-e25b-4fd8-aa7b-9898e0341869-config-data\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.532585 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm2kh\" (UniqueName: \"kubernetes.io/projected/38227f5c-7618-4f26-a240-857ca856b32e-kube-api-access-dm2kh\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " 
pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.532768 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data-custom\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.532873 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kdx7\" (UniqueName: \"kubernetes.io/projected/3f0af068-e25b-4fd8-aa7b-9898e0341869-kube-api-access-6kdx7\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.533049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0af068-e25b-4fd8-aa7b-9898e0341869-combined-ca-bundle\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.545713 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0af068-e25b-4fd8-aa7b-9898e0341869-combined-ca-bundle\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.546615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0af068-e25b-4fd8-aa7b-9898e0341869-config-data\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " 
pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.549470 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kdx7\" (UniqueName: \"kubernetes.io/projected/3f0af068-e25b-4fd8-aa7b-9898e0341869-kube-api-access-6kdx7\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.553196 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f0af068-e25b-4fd8-aa7b-9898e0341869-config-data-custom\") pod \"heat-engine-55c677d475-j7xgs\" (UID: \"3f0af068-e25b-4fd8-aa7b-9898e0341869\") " pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.634972 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data-custom\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.635105 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.635135 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98wt9\" (UniqueName: \"kubernetes.io/projected/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-kube-api-access-98wt9\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 
16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.635174 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.635199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-combined-ca-bundle\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.635218 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-combined-ca-bundle\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.635239 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data-custom\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.635285 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm2kh\" (UniqueName: \"kubernetes.io/projected/38227f5c-7618-4f26-a240-857ca856b32e-kube-api-access-dm2kh\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.645068 
4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data-custom\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.646450 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data-custom\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.650611 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.655933 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.666009 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-combined-ca-bundle\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.666594 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-combined-ca-bundle\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.673657 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98wt9\" (UniqueName: \"kubernetes.io/projected/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-kube-api-access-98wt9\") pod \"heat-cfnapi-744875bc86-hlx4w\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") " pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.681222 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.697477 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm2kh\" (UniqueName: \"kubernetes.io/projected/38227f5c-7618-4f26-a240-857ca856b32e-kube-api-access-dm2kh\") pod \"heat-api-89694dcd7-8mhwc\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") " pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.768881 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:42 crc kubenswrapper[4805]: I0216 21:17:42.787166 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.102303 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.102888 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.163001 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.183135 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.232351 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50b7f8fd-bd74-4136-b349-cb398ee9d44e","Type":"ContainerStarted","Data":"60342da06db413beefe1f64c05609d8868547073f8d629259fffe78155b1d28f"} Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.242393 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.242430 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.407975 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 21:17:43 crc kubenswrapper[4805]: W0216 21:17:43.519178 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5beda969_4d8e_4f58_8ea5_8249f1d6a77b.slice/crio-ef14d7d5b9d9e4e118e1b9282e186bf2f6754086278b9070a48ee83c09cfdc6a WatchSource:0}: Error finding container 
ef14d7d5b9d9e4e118e1b9282e186bf2f6754086278b9070a48ee83c09cfdc6a: Status 404 returned error can't find the container with id ef14d7d5b9d9e4e118e1b9282e186bf2f6754086278b9070a48ee83c09cfdc6a Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.568923 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-744875bc86-hlx4w"] Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.698511 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd" path="/var/lib/kubelet/pods/381367ab-4da1-4ed6-bdcb-9d68c4f7e4dd/volumes" Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.709751 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-55c677d475-j7xgs"] Feb 16 21:17:43 crc kubenswrapper[4805]: I0216 21:17:43.786541 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-89694dcd7-8mhwc"] Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.263881 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-744875bc86-hlx4w" event={"ID":"5beda969-4d8e-4f58-8ea5-8249f1d6a77b","Type":"ContainerStarted","Data":"4f22936185d74c7cae80d5548ed37bce7b5d4ed388d3ba0eb6628f2bd3a7e743"} Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.263941 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-744875bc86-hlx4w" event={"ID":"5beda969-4d8e-4f58-8ea5-8249f1d6a77b","Type":"ContainerStarted","Data":"ef14d7d5b9d9e4e118e1b9282e186bf2f6754086278b9070a48ee83c09cfdc6a"} Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.265949 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.274609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-55c677d475-j7xgs" 
event={"ID":"3f0af068-e25b-4fd8-aa7b-9898e0341869","Type":"ContainerStarted","Data":"e3b2afa24aa7bde26eab39e9e1ff830ae944c546386f0fdb7e7f0903336bfa3d"} Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.274654 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-55c677d475-j7xgs" event={"ID":"3f0af068-e25b-4fd8-aa7b-9898e0341869","Type":"ContainerStarted","Data":"7ffaf4265e5143f5b23b38da61890f714255a04beecc4c129ab02c06f45d6364"} Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.274847 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.297307 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-744875bc86-hlx4w" podStartSLOduration=2.297286544 podStartE2EDuration="2.297286544s" podCreationTimestamp="2026-02-16 21:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:44.283993303 +0000 UTC m=+1282.102676598" watchObservedRunningTime="2026-02-16 21:17:44.297286544 +0000 UTC m=+1282.115969839" Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.311520 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-89694dcd7-8mhwc" event={"ID":"38227f5c-7618-4f26-a240-857ca856b32e","Type":"ContainerStarted","Data":"7fd181ac3f7d111199d5776a423bf44e51434b8917c825b5ce3f64f5637cd4ec"} Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.311913 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-89694dcd7-8mhwc" event={"ID":"38227f5c-7618-4f26-a240-857ca856b32e","Type":"ContainerStarted","Data":"c3505ac92734a16d51e74e99f51397ae385fb285c29ab2caeb103231f08ea57d"} Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.313038 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.321446 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-55c677d475-j7xgs" podStartSLOduration=2.321428929 podStartE2EDuration="2.321428929s" podCreationTimestamp="2026-02-16 21:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:44.320126933 +0000 UTC m=+1282.138810228" watchObservedRunningTime="2026-02-16 21:17:44.321428929 +0000 UTC m=+1282.140112224" Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.325313 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50b7f8fd-bd74-4136-b349-cb398ee9d44e","Type":"ContainerStarted","Data":"ed8e187b334193b84809d619bf069b193c21e8996edd6dcb9e2d22504245fe24"} Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.361939 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-89694dcd7-8mhwc" podStartSLOduration=2.36191945 podStartE2EDuration="2.36191945s" podCreationTimestamp="2026-02-16 21:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:44.339110982 +0000 UTC m=+1282.157794287" watchObservedRunningTime="2026-02-16 21:17:44.36191945 +0000 UTC m=+1282.180602745" Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.542353 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.637684 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxl5d"] Feb 16 21:17:44 crc kubenswrapper[4805]: I0216 21:17:44.638625 4805 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" podUID="82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" containerName="dnsmasq-dns" containerID="cri-o://9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a" gracePeriod=10 Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.316038 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.359056 4805 generic.go:334] "Generic (PLEG): container finished" podID="82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" containerID="9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a" exitCode=0 Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.359142 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" event={"ID":"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7","Type":"ContainerDied","Data":"9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a"} Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.359169 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" event={"ID":"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7","Type":"ContainerDied","Data":"8c1a2e63d3fcdb8566f19a67eeb30429df620a721579ef632e0fe347189379fb"} Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.359188 4805 scope.go:117] "RemoveContainer" containerID="9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.359249 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pxl5d" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.363562 4805 generic.go:334] "Generic (PLEG): container finished" podID="38227f5c-7618-4f26-a240-857ca856b32e" containerID="7fd181ac3f7d111199d5776a423bf44e51434b8917c825b5ce3f64f5637cd4ec" exitCode=1 Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.363616 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-89694dcd7-8mhwc" event={"ID":"38227f5c-7618-4f26-a240-857ca856b32e","Type":"ContainerDied","Data":"7fd181ac3f7d111199d5776a423bf44e51434b8917c825b5ce3f64f5637cd4ec"} Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.364394 4805 scope.go:117] "RemoveContainer" containerID="7fd181ac3f7d111199d5776a423bf44e51434b8917c825b5ce3f64f5637cd4ec" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.372009 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50b7f8fd-bd74-4136-b349-cb398ee9d44e","Type":"ContainerStarted","Data":"2942df01f24d98b99a05fc122a9133788aa5a1723f3fa3dbabdece6b7b2a8416"} Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.384050 4805 generic.go:334] "Generic (PLEG): container finished" podID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" containerID="4f22936185d74c7cae80d5548ed37bce7b5d4ed388d3ba0eb6628f2bd3a7e743" exitCode=1 Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.385439 4805 scope.go:117] "RemoveContainer" containerID="4f22936185d74c7cae80d5548ed37bce7b5d4ed388d3ba0eb6628f2bd3a7e743" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.385806 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-744875bc86-hlx4w" event={"ID":"5beda969-4d8e-4f58-8ea5-8249f1d6a77b","Type":"ContainerDied","Data":"4f22936185d74c7cae80d5548ed37bce7b5d4ed388d3ba0eb6628f2bd3a7e743"} Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.385868 4805 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.385877 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.465937 4805 scope.go:117] "RemoveContainer" containerID="45d9ec231aabf617af0f266cf86732671a42801e82fde3ab96d299fbcd4347dd" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.482422 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4hj6\" (UniqueName: \"kubernetes.io/projected/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-kube-api-access-w4hj6\") pod \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.482518 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-config\") pod \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.482539 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-sb\") pod \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.482568 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-swift-storage-0\") pod \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.482640 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-nb\") pod \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.482662 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-svc\") pod \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\" (UID: \"82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7\") " Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.513466 4805 scope.go:117] "RemoveContainer" containerID="9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.513976 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-kube-api-access-w4hj6" (OuterVolumeSpecName: "kube-api-access-w4hj6") pod "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" (UID: "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7"). InnerVolumeSpecName "kube-api-access-w4hj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:45 crc kubenswrapper[4805]: E0216 21:17:45.539848 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a\": container with ID starting with 9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a not found: ID does not exist" containerID="9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.539898 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a"} err="failed to get container status \"9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a\": rpc error: code = NotFound desc = could not find container \"9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a\": container with ID starting with 9342837d7f0564d551c33f5e8cf937f85c1a979a1ebca9ae8e2623ff9a67b21a not found: ID does not exist" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.539923 4805 scope.go:117] "RemoveContainer" containerID="45d9ec231aabf617af0f266cf86732671a42801e82fde3ab96d299fbcd4347dd" Feb 16 21:17:45 crc kubenswrapper[4805]: E0216 21:17:45.543861 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d9ec231aabf617af0f266cf86732671a42801e82fde3ab96d299fbcd4347dd\": container with ID starting with 45d9ec231aabf617af0f266cf86732671a42801e82fde3ab96d299fbcd4347dd not found: ID does not exist" containerID="45d9ec231aabf617af0f266cf86732671a42801e82fde3ab96d299fbcd4347dd" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.543911 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d9ec231aabf617af0f266cf86732671a42801e82fde3ab96d299fbcd4347dd"} 
err="failed to get container status \"45d9ec231aabf617af0f266cf86732671a42801e82fde3ab96d299fbcd4347dd\": rpc error: code = NotFound desc = could not find container \"45d9ec231aabf617af0f266cf86732671a42801e82fde3ab96d299fbcd4347dd\": container with ID starting with 45d9ec231aabf617af0f266cf86732671a42801e82fde3ab96d299fbcd4347dd not found: ID does not exist" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.574183 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" (UID: "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.574582 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-config" (OuterVolumeSpecName: "config") pod "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" (UID: "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.589269 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.589297 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.589309 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4hj6\" (UniqueName: \"kubernetes.io/projected/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-kube-api-access-w4hj6\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.590182 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" (UID: "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.648142 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" (UID: "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.689346 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" (UID: "82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.692591 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.692624 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:45 crc kubenswrapper[4805]: I0216 21:17:45.692632 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.006704 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.012934 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxl5d"] Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.026898 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxl5d"] Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.156647 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 
21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.297837 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-c779fd9d8-2bxwh"] Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.298287 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-c779fd9d8-2bxwh" podUID="77978f8e-132a-4c91-ba44-f15707b3bedf" containerName="heat-api" containerID="cri-o://c06cec2b2b07f3102e1992bdbd48ee89ae92051112f5a7c781e0dedae9d97682" gracePeriod=60 Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.314020 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-577d8c6b9f-9cj7p"] Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.314258 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" podUID="e29995ab-edfe-486b-bae9-a35226de0320" containerName="heat-cfnapi" containerID="cri-o://74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a" gracePeriod=60 Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.332125 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7587bc9c56-x54w7"] Feb 16 21:17:46 crc kubenswrapper[4805]: E0216 21:17:46.333701 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" containerName="dnsmasq-dns" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.333818 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" containerName="dnsmasq-dns" Feb 16 21:17:46 crc kubenswrapper[4805]: E0216 21:17:46.333912 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" containerName="init" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.334035 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" containerName="init" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 
21:17:46.334358 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" containerName="dnsmasq-dns" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.335377 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.335941 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-c779fd9d8-2bxwh" podUID="77978f8e-132a-4c91-ba44-f15707b3bedf" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.222:8004/healthcheck\": EOF" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.344181 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.344202 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.353376 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7587bc9c56-x54w7"] Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.371513 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-748d64cf47-dqzh6"] Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.373960 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.382812 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-748d64cf47-dqzh6"] Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.388679 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.388898 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.411287 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-config-data\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.411638 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-public-tls-certs\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.411794 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh96n\" (UniqueName: \"kubernetes.io/projected/973fa704-45c8-4ebf-8517-ea1c878cfce9-kube-api-access-dh96n\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.411943 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-public-tls-certs\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.412434 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-internal-tls-certs\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.412567 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-config-data\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.412687 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-config-data-custom\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.412827 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw9ql\" (UniqueName: \"kubernetes.io/projected/12fe6368-7dd3-443c-a135-328753625d21-kube-api-access-tw9ql\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.416963 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-combined-ca-bundle\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.417167 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-combined-ca-bundle\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.417285 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-config-data-custom\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.417444 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-internal-tls-certs\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.428797 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-89694dcd7-8mhwc" event={"ID":"38227f5c-7618-4f26-a240-857ca856b32e","Type":"ContainerStarted","Data":"c39aaad9fd528ef92218e00c6d135e5182bb40208c8b83a5dbb238797e154957"} Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.429677 4805 scope.go:117] "RemoveContainer" containerID="c39aaad9fd528ef92218e00c6d135e5182bb40208c8b83a5dbb238797e154957" Feb 16 21:17:46 
crc kubenswrapper[4805]: E0216 21:17:46.430006 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-89694dcd7-8mhwc_openstack(38227f5c-7618-4f26-a240-857ca856b32e)\"" pod="openstack/heat-api-89694dcd7-8mhwc" podUID="38227f5c-7618-4f26-a240-857ca856b32e" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.432352 4805 generic.go:334] "Generic (PLEG): container finished" podID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" containerID="4970e47908ff879e23efb54093422d058edcf556b50c85fc90f506c254ad5838" exitCode=1 Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.432526 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-744875bc86-hlx4w" event={"ID":"5beda969-4d8e-4f58-8ea5-8249f1d6a77b","Type":"ContainerDied","Data":"4970e47908ff879e23efb54093422d058edcf556b50c85fc90f506c254ad5838"} Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.432652 4805 scope.go:117] "RemoveContainer" containerID="4f22936185d74c7cae80d5548ed37bce7b5d4ed388d3ba0eb6628f2bd3a7e743" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.433126 4805 scope.go:117] "RemoveContainer" containerID="4970e47908ff879e23efb54093422d058edcf556b50c85fc90f506c254ad5838" Feb 16 21:17:46 crc kubenswrapper[4805]: E0216 21:17:46.433415 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-744875bc86-hlx4w_openstack(5beda969-4d8e-4f58-8ea5-8249f1d6a77b)\"" pod="openstack/heat-cfnapi-744875bc86-hlx4w" podUID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.525187 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-combined-ca-bundle\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.525750 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-combined-ca-bundle\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.525886 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-config-data-custom\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.527027 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-internal-tls-certs\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.527358 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-config-data\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.527378 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-public-tls-certs\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.527425 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh96n\" (UniqueName: \"kubernetes.io/projected/973fa704-45c8-4ebf-8517-ea1c878cfce9-kube-api-access-dh96n\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.527496 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-public-tls-certs\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.527541 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-internal-tls-certs\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.527643 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-config-data\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.527714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-config-data-custom\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.527799 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw9ql\" (UniqueName: \"kubernetes.io/projected/12fe6368-7dd3-443c-a135-328753625d21-kube-api-access-tw9ql\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.535652 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-config-data\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.536241 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-internal-tls-certs\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.536657 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-config-data-custom\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.552443 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-combined-ca-bundle\") 
pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.554543 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-public-tls-certs\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.556265 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-public-tls-certs\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.556928 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-config-data-custom\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.557284 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw9ql\" (UniqueName: \"kubernetes.io/projected/12fe6368-7dd3-443c-a135-328753625d21-kube-api-access-tw9ql\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.557346 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-combined-ca-bundle\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " 
pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.573602 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh96n\" (UniqueName: \"kubernetes.io/projected/973fa704-45c8-4ebf-8517-ea1c878cfce9-kube-api-access-dh96n\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.574454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/973fa704-45c8-4ebf-8517-ea1c878cfce9-internal-tls-certs\") pod \"heat-cfnapi-748d64cf47-dqzh6\" (UID: \"973fa704-45c8-4ebf-8517-ea1c878cfce9\") " pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.577377 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12fe6368-7dd3-443c-a135-328753625d21-config-data\") pod \"heat-api-7587bc9c56-x54w7\" (UID: \"12fe6368-7dd3-443c-a135-328753625d21\") " pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.660530 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.677195 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:46 crc kubenswrapper[4805]: I0216 21:17:46.775646 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" podUID="e29995ab-edfe-486b-bae9-a35226de0320" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.220:8000/healthcheck\": read tcp 10.217.0.2:35906->10.217.0.220:8000: read: connection reset by peer" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.226203 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7587bc9c56-x54w7"] Feb 16 21:17:47 crc kubenswrapper[4805]: W0216 21:17:47.252435 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12fe6368_7dd3_443c_a135_328753625d21.slice/crio-4c9b7b77b71698c140edaef3f970fd9bc405cd2bef5083c649995e50df795d0d WatchSource:0}: Error finding container 4c9b7b77b71698c140edaef3f970fd9bc405cd2bef5083c649995e50df795d0d: Status 404 returned error can't find the container with id 4c9b7b77b71698c140edaef3f970fd9bc405cd2bef5083c649995e50df795d0d Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.320146 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.384646 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-748d64cf47-dqzh6"] Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.456500 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dj55\" (UniqueName: \"kubernetes.io/projected/e29995ab-edfe-486b-bae9-a35226de0320-kube-api-access-5dj55\") pod \"e29995ab-edfe-486b-bae9-a35226de0320\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.456711 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data\") pod \"e29995ab-edfe-486b-bae9-a35226de0320\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.456841 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-combined-ca-bundle\") pod \"e29995ab-edfe-486b-bae9-a35226de0320\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.457262 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data-custom\") pod \"e29995ab-edfe-486b-bae9-a35226de0320\" (UID: \"e29995ab-edfe-486b-bae9-a35226de0320\") " Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.464839 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e29995ab-edfe-486b-bae9-a35226de0320" (UID: 
"e29995ab-edfe-486b-bae9-a35226de0320"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.469139 4805 scope.go:117] "RemoveContainer" containerID="4970e47908ff879e23efb54093422d058edcf556b50c85fc90f506c254ad5838" Feb 16 21:17:47 crc kubenswrapper[4805]: E0216 21:17:47.469480 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-744875bc86-hlx4w_openstack(5beda969-4d8e-4f58-8ea5-8249f1d6a77b)\"" pod="openstack/heat-cfnapi-744875bc86-hlx4w" podUID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.470351 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-748d64cf47-dqzh6" event={"ID":"973fa704-45c8-4ebf-8517-ea1c878cfce9","Type":"ContainerStarted","Data":"c886675f2fcf8bf083bacd9b2e9d25c69b28ef42d28834364184116e761224d5"} Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.478465 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7587bc9c56-x54w7" event={"ID":"12fe6368-7dd3-443c-a135-328753625d21","Type":"ContainerStarted","Data":"4c9b7b77b71698c140edaef3f970fd9bc405cd2bef5083c649995e50df795d0d"} Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.478700 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29995ab-edfe-486b-bae9-a35226de0320-kube-api-access-5dj55" (OuterVolumeSpecName: "kube-api-access-5dj55") pod "e29995ab-edfe-486b-bae9-a35226de0320" (UID: "e29995ab-edfe-486b-bae9-a35226de0320"). InnerVolumeSpecName "kube-api-access-5dj55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.496037 4805 generic.go:334] "Generic (PLEG): container finished" podID="38227f5c-7618-4f26-a240-857ca856b32e" containerID="c39aaad9fd528ef92218e00c6d135e5182bb40208c8b83a5dbb238797e154957" exitCode=1 Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.496124 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-89694dcd7-8mhwc" event={"ID":"38227f5c-7618-4f26-a240-857ca856b32e","Type":"ContainerDied","Data":"c39aaad9fd528ef92218e00c6d135e5182bb40208c8b83a5dbb238797e154957"} Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.496154 4805 scope.go:117] "RemoveContainer" containerID="7fd181ac3f7d111199d5776a423bf44e51434b8917c825b5ce3f64f5637cd4ec" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.496854 4805 scope.go:117] "RemoveContainer" containerID="c39aaad9fd528ef92218e00c6d135e5182bb40208c8b83a5dbb238797e154957" Feb 16 21:17:47 crc kubenswrapper[4805]: E0216 21:17:47.497125 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-89694dcd7-8mhwc_openstack(38227f5c-7618-4f26-a240-857ca856b32e)\"" pod="openstack/heat-api-89694dcd7-8mhwc" podUID="38227f5c-7618-4f26-a240-857ca856b32e" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.508407 4805 generic.go:334] "Generic (PLEG): container finished" podID="e29995ab-edfe-486b-bae9-a35226de0320" containerID="74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a" exitCode=0 Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.508544 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" event={"ID":"e29995ab-edfe-486b-bae9-a35226de0320","Type":"ContainerDied","Data":"74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a"} Feb 16 21:17:47 crc kubenswrapper[4805]: 
I0216 21:17:47.508590 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" event={"ID":"e29995ab-edfe-486b-bae9-a35226de0320","Type":"ContainerDied","Data":"5ec188d56779ce2ea4983c25b4acec83901d0384ea05b6677dc403943e43651e"} Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.508518 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-577d8c6b9f-9cj7p" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.545609 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e29995ab-edfe-486b-bae9-a35226de0320" (UID: "e29995ab-edfe-486b-bae9-a35226de0320"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.563545 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.563587 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dj55\" (UniqueName: \"kubernetes.io/projected/e29995ab-edfe-486b-bae9-a35226de0320-kube-api-access-5dj55\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.563600 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.638632 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7" path="/var/lib/kubelet/pods/82c4a0ac-984b-4dd6-b70f-8c9ddbdf53d7/volumes" Feb 16 
21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.660661 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data" (OuterVolumeSpecName: "config-data") pod "e29995ab-edfe-486b-bae9-a35226de0320" (UID: "e29995ab-edfe-486b-bae9-a35226de0320"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.666446 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29995ab-edfe-486b-bae9-a35226de0320-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.710469 4805 scope.go:117] "RemoveContainer" containerID="74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.746665 4805 scope.go:117] "RemoveContainer" containerID="74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a" Feb 16 21:17:47 crc kubenswrapper[4805]: E0216 21:17:47.747326 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a\": container with ID starting with 74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a not found: ID does not exist" containerID="74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.747372 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a"} err="failed to get container status \"74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a\": rpc error: code = NotFound desc = could not find container \"74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a\": container with ID starting with 
74605bb8df62574b1bb0468cb65b56e2f907a2768ffde064c7796785503b817a not found: ID does not exist" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.770378 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.770443 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-744875bc86-hlx4w" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.788265 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.788305 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-89694dcd7-8mhwc" Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.865202 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-577d8c6b9f-9cj7p"] Feb 16 21:17:47 crc kubenswrapper[4805]: I0216 21:17:47.884901 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-577d8c6b9f-9cj7p"] Feb 16 21:17:48 crc kubenswrapper[4805]: I0216 21:17:48.528154 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-748d64cf47-dqzh6" event={"ID":"973fa704-45c8-4ebf-8517-ea1c878cfce9","Type":"ContainerStarted","Data":"ac31672a3c8e64422a7d20fc9415f40404eccaff8b38882269b76bd4906991e5"} Feb 16 21:17:48 crc kubenswrapper[4805]: I0216 21:17:48.529925 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-748d64cf47-dqzh6" Feb 16 21:17:48 crc kubenswrapper[4805]: I0216 21:17:48.539339 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7587bc9c56-x54w7" event={"ID":"12fe6368-7dd3-443c-a135-328753625d21","Type":"ContainerStarted","Data":"627885f0db3511753fcc3716f12dd23eb6fbf3d64dece90bebd000f15e3267de"} Feb 16 21:17:48 crc kubenswrapper[4805]: I0216 
21:17:48.545277 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7587bc9c56-x54w7" Feb 16 21:17:48 crc kubenswrapper[4805]: I0216 21:17:48.556804 4805 scope.go:117] "RemoveContainer" containerID="c39aaad9fd528ef92218e00c6d135e5182bb40208c8b83a5dbb238797e154957" Feb 16 21:17:48 crc kubenswrapper[4805]: E0216 21:17:48.557029 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-89694dcd7-8mhwc_openstack(38227f5c-7618-4f26-a240-857ca856b32e)\"" pod="openstack/heat-api-89694dcd7-8mhwc" podUID="38227f5c-7618-4f26-a240-857ca856b32e" Feb 16 21:17:48 crc kubenswrapper[4805]: I0216 21:17:48.561993 4805 scope.go:117] "RemoveContainer" containerID="4970e47908ff879e23efb54093422d058edcf556b50c85fc90f506c254ad5838" Feb 16 21:17:48 crc kubenswrapper[4805]: E0216 21:17:48.562229 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-744875bc86-hlx4w_openstack(5beda969-4d8e-4f58-8ea5-8249f1d6a77b)\"" pod="openstack/heat-cfnapi-744875bc86-hlx4w" podUID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" Feb 16 21:17:48 crc kubenswrapper[4805]: I0216 21:17:48.604296 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-748d64cf47-dqzh6" podStartSLOduration=2.6042570400000002 podStartE2EDuration="2.60425704s" podCreationTimestamp="2026-02-16 21:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:48.56304376 +0000 UTC m=+1286.381727055" watchObservedRunningTime="2026-02-16 21:17:48.60425704 +0000 UTC m=+1286.422940335" Feb 16 21:17:48 crc kubenswrapper[4805]: I0216 21:17:48.617236 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7587bc9c56-x54w7" podStartSLOduration=2.617215243 podStartE2EDuration="2.617215243s" podCreationTimestamp="2026-02-16 21:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:17:48.586031881 +0000 UTC m=+1286.404715176" watchObservedRunningTime="2026-02-16 21:17:48.617215243 +0000 UTC m=+1286.435898538" Feb 16 21:17:49 crc kubenswrapper[4805]: I0216 21:17:49.572223 4805 scope.go:117] "RemoveContainer" containerID="c39aaad9fd528ef92218e00c6d135e5182bb40208c8b83a5dbb238797e154957" Feb 16 21:17:49 crc kubenswrapper[4805]: I0216 21:17:49.572858 4805 scope.go:117] "RemoveContainer" containerID="4970e47908ff879e23efb54093422d058edcf556b50c85fc90f506c254ad5838" Feb 16 21:17:49 crc kubenswrapper[4805]: E0216 21:17:49.573132 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-744875bc86-hlx4w_openstack(5beda969-4d8e-4f58-8ea5-8249f1d6a77b)\"" pod="openstack/heat-cfnapi-744875bc86-hlx4w" podUID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" Feb 16 21:17:49 crc kubenswrapper[4805]: E0216 21:17:49.573159 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-89694dcd7-8mhwc_openstack(38227f5c-7618-4f26-a240-857ca856b32e)\"" pod="openstack/heat-api-89694dcd7-8mhwc" podUID="38227f5c-7618-4f26-a240-857ca856b32e" Feb 16 21:17:49 crc kubenswrapper[4805]: I0216 21:17:49.612342 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e29995ab-edfe-486b-bae9-a35226de0320" path="/var/lib/kubelet/pods/e29995ab-edfe-486b-bae9-a35226de0320/volumes" Feb 16 21:17:49 crc kubenswrapper[4805]: I0216 
21:17:49.884895 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:17:51 crc kubenswrapper[4805]: I0216 21:17:51.731216 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-c779fd9d8-2bxwh" podUID="77978f8e-132a-4c91-ba44-f15707b3bedf" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.222:8004/healthcheck\": read tcp 10.217.0.2:56020->10.217.0.222:8004: read: connection reset by peer" Feb 16 21:17:52 crc kubenswrapper[4805]: I0216 21:17:52.610235 4805 generic.go:334] "Generic (PLEG): container finished" podID="77978f8e-132a-4c91-ba44-f15707b3bedf" containerID="c06cec2b2b07f3102e1992bdbd48ee89ae92051112f5a7c781e0dedae9d97682" exitCode=0 Feb 16 21:17:52 crc kubenswrapper[4805]: I0216 21:17:52.610378 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c779fd9d8-2bxwh" event={"ID":"77978f8e-132a-4c91-ba44-f15707b3bedf","Type":"ContainerDied","Data":"c06cec2b2b07f3102e1992bdbd48ee89ae92051112f5a7c781e0dedae9d97682"} Feb 16 21:17:54 crc kubenswrapper[4805]: I0216 21:17:54.235170 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-58889dd686-zfmvp" Feb 16 21:17:54 crc kubenswrapper[4805]: I0216 21:17:54.597355 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-c779fd9d8-2bxwh" podUID="77978f8e-132a-4c91-ba44-f15707b3bedf" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.222:8004/healthcheck\": dial tcp 10.217.0.222:8004: connect: connection refused" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.053465 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.060116 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data-custom\") pod \"77978f8e-132a-4c91-ba44-f15707b3bedf\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.060485 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data\") pod \"77978f8e-132a-4c91-ba44-f15707b3bedf\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.060683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vmjp\" (UniqueName: \"kubernetes.io/projected/77978f8e-132a-4c91-ba44-f15707b3bedf-kube-api-access-8vmjp\") pod \"77978f8e-132a-4c91-ba44-f15707b3bedf\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.060838 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-combined-ca-bundle\") pod \"77978f8e-132a-4c91-ba44-f15707b3bedf\" (UID: \"77978f8e-132a-4c91-ba44-f15707b3bedf\") " Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.066442 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "77978f8e-132a-4c91-ba44-f15707b3bedf" (UID: "77978f8e-132a-4c91-ba44-f15707b3bedf"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.070894 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77978f8e-132a-4c91-ba44-f15707b3bedf-kube-api-access-8vmjp" (OuterVolumeSpecName: "kube-api-access-8vmjp") pod "77978f8e-132a-4c91-ba44-f15707b3bedf" (UID: "77978f8e-132a-4c91-ba44-f15707b3bedf"). InnerVolumeSpecName "kube-api-access-8vmjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.148869 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77978f8e-132a-4c91-ba44-f15707b3bedf" (UID: "77978f8e-132a-4c91-ba44-f15707b3bedf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.162915 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.162942 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vmjp\" (UniqueName: \"kubernetes.io/projected/77978f8e-132a-4c91-ba44-f15707b3bedf-kube-api-access-8vmjp\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.162951 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.166977 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data" (OuterVolumeSpecName: "config-data") pod "77978f8e-132a-4c91-ba44-f15707b3bedf" (UID: "77978f8e-132a-4c91-ba44-f15707b3bedf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.264643 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77978f8e-132a-4c91-ba44-f15707b3bedf-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.639831 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-86jb7" event={"ID":"697d83c1-bcef-40ab-b260-070417df0a62","Type":"ContainerStarted","Data":"0d3f55d1e96ba4c67d4b54d5cd430c94cc1a55a8cc7fce91c676a45b9575e56b"} Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.645183 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-c779fd9d8-2bxwh" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.645364 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c779fd9d8-2bxwh" event={"ID":"77978f8e-132a-4c91-ba44-f15707b3bedf","Type":"ContainerDied","Data":"121fc60102b12208606d42993122dde8738b4a358017e91a939ee8e73dbe3dde"} Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.645427 4805 scope.go:117] "RemoveContainer" containerID="c06cec2b2b07f3102e1992bdbd48ee89ae92051112f5a7c781e0dedae9d97682" Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.650149 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50b7f8fd-bd74-4136-b349-cb398ee9d44e","Type":"ContainerStarted","Data":"46c9b93e78f7ac86d999ac1caba323cb44d2f6d6c1347548ae1a13e7fbdffa33"} Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.650313 4805 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="ceilometer-central-agent" containerID="cri-o://60342da06db413beefe1f64c05609d8868547073f8d629259fffe78155b1d28f" gracePeriod=30
Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.650367 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.650398 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="proxy-httpd" containerID="cri-o://46c9b93e78f7ac86d999ac1caba323cb44d2f6d6c1347548ae1a13e7fbdffa33" gracePeriod=30
Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.650402 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="sg-core" containerID="cri-o://2942df01f24d98b99a05fc122a9133788aa5a1723f3fa3dbabdece6b7b2a8416" gracePeriod=30
Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.650420 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="ceilometer-notification-agent" containerID="cri-o://ed8e187b334193b84809d619bf069b193c21e8996edd6dcb9e2d22504245fe24" gracePeriod=30
Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.671066 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-86jb7" podStartSLOduration=2.470699627 podStartE2EDuration="15.669716424s" podCreationTimestamp="2026-02-16 21:17:40 +0000 UTC" firstStartedPulling="2026-02-16 21:17:41.430794974 +0000 UTC m=+1279.249478269" lastFinishedPulling="2026-02-16 21:17:54.629811771 +0000 UTC m=+1292.448495066" observedRunningTime="2026-02-16 21:17:55.667707349 +0000 UTC m=+1293.486390644" watchObservedRunningTime="2026-02-16 21:17:55.669716424 +0000 UTC m=+1293.488399719"
Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.702703 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.041197158 podStartE2EDuration="15.702676098s" podCreationTimestamp="2026-02-16 21:17:40 +0000 UTC" firstStartedPulling="2026-02-16 21:17:41.946155055 +0000 UTC m=+1279.764838350" lastFinishedPulling="2026-02-16 21:17:54.607633995 +0000 UTC m=+1292.426317290" observedRunningTime="2026-02-16 21:17:55.695708591 +0000 UTC m=+1293.514391896" watchObservedRunningTime="2026-02-16 21:17:55.702676098 +0000 UTC m=+1293.521359393"
Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.730248 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-c779fd9d8-2bxwh"]
Feb 16 21:17:55 crc kubenswrapper[4805]: I0216 21:17:55.744941 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-c779fd9d8-2bxwh"]
Feb 16 21:17:56 crc kubenswrapper[4805]: I0216 21:17:56.665410 4805 generic.go:334] "Generic (PLEG): container finished" podID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerID="46c9b93e78f7ac86d999ac1caba323cb44d2f6d6c1347548ae1a13e7fbdffa33" exitCode=0
Feb 16 21:17:56 crc kubenswrapper[4805]: I0216 21:17:56.665800 4805 generic.go:334] "Generic (PLEG): container finished" podID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerID="2942df01f24d98b99a05fc122a9133788aa5a1723f3fa3dbabdece6b7b2a8416" exitCode=2
Feb 16 21:17:56 crc kubenswrapper[4805]: I0216 21:17:56.665812 4805 generic.go:334] "Generic (PLEG): container finished" podID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerID="ed8e187b334193b84809d619bf069b193c21e8996edd6dcb9e2d22504245fe24" exitCode=0
Feb 16 21:17:56 crc kubenswrapper[4805]: I0216 21:17:56.665823 4805 generic.go:334] "Generic (PLEG): container finished" podID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerID="60342da06db413beefe1f64c05609d8868547073f8d629259fffe78155b1d28f" exitCode=0
Feb 16 21:17:56 crc kubenswrapper[4805]: I0216 21:17:56.665496 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50b7f8fd-bd74-4136-b349-cb398ee9d44e","Type":"ContainerDied","Data":"46c9b93e78f7ac86d999ac1caba323cb44d2f6d6c1347548ae1a13e7fbdffa33"}
Feb 16 21:17:56 crc kubenswrapper[4805]: I0216 21:17:56.665895 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50b7f8fd-bd74-4136-b349-cb398ee9d44e","Type":"ContainerDied","Data":"2942df01f24d98b99a05fc122a9133788aa5a1723f3fa3dbabdece6b7b2a8416"}
Feb 16 21:17:56 crc kubenswrapper[4805]: I0216 21:17:56.665911 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50b7f8fd-bd74-4136-b349-cb398ee9d44e","Type":"ContainerDied","Data":"ed8e187b334193b84809d619bf069b193c21e8996edd6dcb9e2d22504245fe24"}
Feb 16 21:17:56 crc kubenswrapper[4805]: I0216 21:17:56.665927 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50b7f8fd-bd74-4136-b349-cb398ee9d44e","Type":"ContainerDied","Data":"60342da06db413beefe1f64c05609d8868547073f8d629259fffe78155b1d28f"}
Feb 16 21:17:56 crc kubenswrapper[4805]: I0216 21:17:56.812887 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 21:17:56 crc kubenswrapper[4805]: I0216 21:17:56.999746 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-scripts\") pod \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") "
Feb 16 21:17:56 crc kubenswrapper[4805]: I0216 21:17:56.999807 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-sg-core-conf-yaml\") pod \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") "
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:56.999912 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-combined-ca-bundle\") pod \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") "
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:56.999933 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-log-httpd\") pod \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") "
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:56.999976 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2222p\" (UniqueName: \"kubernetes.io/projected/50b7f8fd-bd74-4136-b349-cb398ee9d44e-kube-api-access-2222p\") pod \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") "
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.000046 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-run-httpd\") pod \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") "
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.000107 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-config-data\") pod \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\" (UID: \"50b7f8fd-bd74-4136-b349-cb398ee9d44e\") "
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.001049 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "50b7f8fd-bd74-4136-b349-cb398ee9d44e" (UID: "50b7f8fd-bd74-4136-b349-cb398ee9d44e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.001212 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "50b7f8fd-bd74-4136-b349-cb398ee9d44e" (UID: "50b7f8fd-bd74-4136-b349-cb398ee9d44e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.011026 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-scripts" (OuterVolumeSpecName: "scripts") pod "50b7f8fd-bd74-4136-b349-cb398ee9d44e" (UID: "50b7f8fd-bd74-4136-b349-cb398ee9d44e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.031493 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b7f8fd-bd74-4136-b349-cb398ee9d44e-kube-api-access-2222p" (OuterVolumeSpecName: "kube-api-access-2222p") pod "50b7f8fd-bd74-4136-b349-cb398ee9d44e" (UID: "50b7f8fd-bd74-4136-b349-cb398ee9d44e"). InnerVolumeSpecName "kube-api-access-2222p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.044015 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "50b7f8fd-bd74-4136-b349-cb398ee9d44e" (UID: "50b7f8fd-bd74-4136-b349-cb398ee9d44e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.100518 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50b7f8fd-bd74-4136-b349-cb398ee9d44e" (UID: "50b7f8fd-bd74-4136-b349-cb398ee9d44e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.103153 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.103182 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.103193 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.103202 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.103212 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50b7f8fd-bd74-4136-b349-cb398ee9d44e-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.103220 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2222p\" (UniqueName: \"kubernetes.io/projected/50b7f8fd-bd74-4136-b349-cb398ee9d44e-kube-api-access-2222p\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.151038 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-config-data" (OuterVolumeSpecName: "config-data") pod "50b7f8fd-bd74-4136-b349-cb398ee9d44e" (UID: "50b7f8fd-bd74-4136-b349-cb398ee9d44e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.205074 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b7f8fd-bd74-4136-b349-cb398ee9d44e-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.616091 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77978f8e-132a-4c91-ba44-f15707b3bedf" path="/var/lib/kubelet/pods/77978f8e-132a-4c91-ba44-f15707b3bedf/volumes"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.687311 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50b7f8fd-bd74-4136-b349-cb398ee9d44e","Type":"ContainerDied","Data":"f891e52f0bfdcff739ca76960893031276c9c57f72698b3ad08bfeb94f2e20b7"}
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.687359 4805 scope.go:117] "RemoveContainer" containerID="46c9b93e78f7ac86d999ac1caba323cb44d2f6d6c1347548ae1a13e7fbdffa33"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.687492 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.739801 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.744689 4805 scope.go:117] "RemoveContainer" containerID="2942df01f24d98b99a05fc122a9133788aa5a1723f3fa3dbabdece6b7b2a8416"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.807052 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.827303 4805 scope.go:117] "RemoveContainer" containerID="ed8e187b334193b84809d619bf069b193c21e8996edd6dcb9e2d22504245fe24"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.837929 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 21:17:57 crc kubenswrapper[4805]: E0216 21:17:57.838786 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="proxy-httpd"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.838811 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="proxy-httpd"
Feb 16 21:17:57 crc kubenswrapper[4805]: E0216 21:17:57.838825 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="ceilometer-notification-agent"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.838833 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="ceilometer-notification-agent"
Feb 16 21:17:57 crc kubenswrapper[4805]: E0216 21:17:57.838855 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29995ab-edfe-486b-bae9-a35226de0320" containerName="heat-cfnapi"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.838863 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29995ab-edfe-486b-bae9-a35226de0320" containerName="heat-cfnapi"
Feb 16 21:17:57 crc kubenswrapper[4805]: E0216 21:17:57.838889 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77978f8e-132a-4c91-ba44-f15707b3bedf" containerName="heat-api"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.838898 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="77978f8e-132a-4c91-ba44-f15707b3bedf" containerName="heat-api"
Feb 16 21:17:57 crc kubenswrapper[4805]: E0216 21:17:57.838920 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="sg-core"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.838928 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="sg-core"
Feb 16 21:17:57 crc kubenswrapper[4805]: E0216 21:17:57.838937 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="ceilometer-central-agent"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.838945 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="ceilometer-central-agent"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.839213 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="proxy-httpd"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.839238 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29995ab-edfe-486b-bae9-a35226de0320" containerName="heat-cfnapi"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.839248 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="ceilometer-central-agent"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.839268 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="sg-core"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.839281 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="77978f8e-132a-4c91-ba44-f15707b3bedf" containerName="heat-api"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.839290 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" containerName="ceilometer-notification-agent"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.842045 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.846933 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.847255 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.862263 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.891549 4805 scope.go:117] "RemoveContainer" containerID="60342da06db413beefe1f64c05609d8868547073f8d629259fffe78155b1d28f"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.928076 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-log-httpd\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.928217 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-scripts\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.928283 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-run-httpd\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.928426 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.928553 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-config-data\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.928636 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:57 crc kubenswrapper[4805]: I0216 21:17:57.928663 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-429d4\" (UniqueName: \"kubernetes.io/projected/666e47bf-382e-4e38-91ce-81ea94c67208-kube-api-access-429d4\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.006105 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7587bc9c56-x54w7"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.030997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-config-data\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.031067 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.031091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-429d4\" (UniqueName: \"kubernetes.io/projected/666e47bf-382e-4e38-91ce-81ea94c67208-kube-api-access-429d4\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.031487 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-log-httpd\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.031892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-log-httpd\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.031973 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-scripts\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.032338 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-run-httpd\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.032454 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.032953 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-run-httpd\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.039370 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-scripts\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.041202 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.041650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-config-data\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.046658 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.058492 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-429d4\" (UniqueName: \"kubernetes.io/projected/666e47bf-382e-4e38-91ce-81ea94c67208-kube-api-access-429d4\") pod \"ceilometer-0\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.091305 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-89694dcd7-8mhwc"]
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.148086 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-748d64cf47-dqzh6"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.166265 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.244523 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-744875bc86-hlx4w"]
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.653617 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-89694dcd7-8mhwc"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.703269 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-744875bc86-hlx4w" event={"ID":"5beda969-4d8e-4f58-8ea5-8249f1d6a77b","Type":"ContainerDied","Data":"ef14d7d5b9d9e4e118e1b9282e186bf2f6754086278b9070a48ee83c09cfdc6a"}
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.703314 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef14d7d5b9d9e4e118e1b9282e186bf2f6754086278b9070a48ee83c09cfdc6a"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.705672 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-89694dcd7-8mhwc" event={"ID":"38227f5c-7618-4f26-a240-857ca856b32e","Type":"ContainerDied","Data":"c3505ac92734a16d51e74e99f51397ae385fb285c29ab2caeb103231f08ea57d"}
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.705705 4805 scope.go:117] "RemoveContainer" containerID="c39aaad9fd528ef92218e00c6d135e5182bb40208c8b83a5dbb238797e154957"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.705822 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-89694dcd7-8mhwc"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.715184 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-744875bc86-hlx4w"
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.749489 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data\") pod \"38227f5c-7618-4f26-a240-857ca856b32e\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") "
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.749781 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm2kh\" (UniqueName: \"kubernetes.io/projected/38227f5c-7618-4f26-a240-857ca856b32e-kube-api-access-dm2kh\") pod \"38227f5c-7618-4f26-a240-857ca856b32e\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") "
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.749845 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data-custom\") pod \"38227f5c-7618-4f26-a240-857ca856b32e\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") "
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.749953 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-combined-ca-bundle\") pod \"38227f5c-7618-4f26-a240-857ca856b32e\" (UID: \"38227f5c-7618-4f26-a240-857ca856b32e\") "
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.758865 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "38227f5c-7618-4f26-a240-857ca856b32e" (UID: "38227f5c-7618-4f26-a240-857ca856b32e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.762926 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38227f5c-7618-4f26-a240-857ca856b32e-kube-api-access-dm2kh" (OuterVolumeSpecName: "kube-api-access-dm2kh") pod "38227f5c-7618-4f26-a240-857ca856b32e" (UID: "38227f5c-7618-4f26-a240-857ca856b32e"). InnerVolumeSpecName "kube-api-access-dm2kh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.790592 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38227f5c-7618-4f26-a240-857ca856b32e" (UID: "38227f5c-7618-4f26-a240-857ca856b32e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.814041 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data" (OuterVolumeSpecName: "config-data") pod "38227f5c-7618-4f26-a240-857ca856b32e" (UID: "38227f5c-7618-4f26-a240-857ca856b32e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:58 crc kubenswrapper[4805]: W0216 21:17:58.829323 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod666e47bf_382e_4e38_91ce_81ea94c67208.slice/crio-962d84605418db10a62b179814ed1591e0e47891990419076bcebf948288d28a WatchSource:0}: Error finding container 962d84605418db10a62b179814ed1591e0e47891990419076bcebf948288d28a: Status 404 returned error can't find the container with id 962d84605418db10a62b179814ed1591e0e47891990419076bcebf948288d28a
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.834577 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.852241 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data\") pod \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") "
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.852335 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data-custom\") pod \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") "
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.852398 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98wt9\" (UniqueName: \"kubernetes.io/projected/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-kube-api-access-98wt9\") pod \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") "
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.852478 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-combined-ca-bundle\") pod \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\" (UID: \"5beda969-4d8e-4f58-8ea5-8249f1d6a77b\") "
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.852998 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.853017 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.853027 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm2kh\" (UniqueName: \"kubernetes.io/projected/38227f5c-7618-4f26-a240-857ca856b32e-kube-api-access-dm2kh\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.853038 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38227f5c-7618-4f26-a240-857ca856b32e-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.856707 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5beda969-4d8e-4f58-8ea5-8249f1d6a77b" (UID: "5beda969-4d8e-4f58-8ea5-8249f1d6a77b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.858135 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-kube-api-access-98wt9" (OuterVolumeSpecName: "kube-api-access-98wt9") pod "5beda969-4d8e-4f58-8ea5-8249f1d6a77b" (UID: "5beda969-4d8e-4f58-8ea5-8249f1d6a77b"). InnerVolumeSpecName "kube-api-access-98wt9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.882971 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5beda969-4d8e-4f58-8ea5-8249f1d6a77b" (UID: "5beda969-4d8e-4f58-8ea5-8249f1d6a77b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.911222 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data" (OuterVolumeSpecName: "config-data") pod "5beda969-4d8e-4f58-8ea5-8249f1d6a77b" (UID: "5beda969-4d8e-4f58-8ea5-8249f1d6a77b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.955527 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.955564 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.955574 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:58 crc kubenswrapper[4805]: I0216 21:17:58.955583 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98wt9\" (UniqueName: \"kubernetes.io/projected/5beda969-4d8e-4f58-8ea5-8249f1d6a77b-kube-api-access-98wt9\") on node \"crc\" DevicePath \"\""
Feb 16 21:17:59 crc kubenswrapper[4805]: I0216 21:17:59.049506 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-89694dcd7-8mhwc"]
Feb 16 21:17:59 crc kubenswrapper[4805]: I0216 21:17:59.066563 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-89694dcd7-8mhwc"]
Feb 16 21:17:59 crc kubenswrapper[4805]: I0216 21:17:59.613641 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38227f5c-7618-4f26-a240-857ca856b32e" path="/var/lib/kubelet/pods/38227f5c-7618-4f26-a240-857ca856b32e/volumes"
Feb 16 21:17:59 crc kubenswrapper[4805]: I0216 21:17:59.614495 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b7f8fd-bd74-4136-b349-cb398ee9d44e" path="/var/lib/kubelet/pods/50b7f8fd-bd74-4136-b349-cb398ee9d44e/volumes"
Feb 16 21:17:59 crc kubenswrapper[4805]: I0216 21:17:59.763996 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"666e47bf-382e-4e38-91ce-81ea94c67208","Type":"ContainerStarted","Data":"3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9"}
Feb 16 21:17:59 crc kubenswrapper[4805]: I0216 21:17:59.764067 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"666e47bf-382e-4e38-91ce-81ea94c67208","Type":"ContainerStarted","Data":"962d84605418db10a62b179814ed1591e0e47891990419076bcebf948288d28a"}
Feb 16 21:17:59 crc kubenswrapper[4805]: I0216 21:17:59.764020 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-744875bc86-hlx4w"
Feb 16 21:17:59 crc kubenswrapper[4805]: I0216 21:17:59.793615 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-744875bc86-hlx4w"]
Feb 16 21:17:59 crc kubenswrapper[4805]: I0216 21:17:59.804573 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-744875bc86-hlx4w"]
Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.260186 4805 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod7352be72-3bf9-4377-a713-ab6058b6785f"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod7352be72-3bf9-4377-a713-ab6058b6785f] : Timed out while waiting for systemd to remove kubepods-besteffort-pod7352be72_3bf9_4377_a713_ab6058b6785f.slice"
Feb 16 21:18:00 crc kubenswrapper[4805]: E0216 21:18:00.260237 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod7352be72-3bf9-4377-a713-ab6058b6785f] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod7352be72-3bf9-4377-a713-ab6058b6785f] : Timed out while waiting for systemd to remove kubepods-besteffort-pod7352be72_3bf9_4377_a713_ab6058b6785f.slice"
pod="openstack/glance-default-external-api-0" podUID="7352be72-3bf9-4377-a713-ab6058b6785f" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.774517 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.774505 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"666e47bf-382e-4e38-91ce-81ea94c67208","Type":"ContainerStarted","Data":"d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf"} Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.823534 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.834350 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.857234 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:18:00 crc kubenswrapper[4805]: E0216 21:18:00.857824 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" containerName="heat-cfnapi" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.857846 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" containerName="heat-cfnapi" Feb 16 21:18:00 crc kubenswrapper[4805]: E0216 21:18:00.857880 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38227f5c-7618-4f26-a240-857ca856b32e" containerName="heat-api" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.857887 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="38227f5c-7618-4f26-a240-857ca856b32e" containerName="heat-api" Feb 16 21:18:00 crc kubenswrapper[4805]: E0216 21:18:00.857899 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="38227f5c-7618-4f26-a240-857ca856b32e" containerName="heat-api" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.857906 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="38227f5c-7618-4f26-a240-857ca856b32e" containerName="heat-api" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.858127 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="38227f5c-7618-4f26-a240-857ca856b32e" containerName="heat-api" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.858151 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="38227f5c-7618-4f26-a240-857ca856b32e" containerName="heat-api" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.858162 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" containerName="heat-cfnapi" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.858176 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" containerName="heat-cfnapi" Feb 16 21:18:00 crc kubenswrapper[4805]: E0216 21:18:00.858408 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" containerName="heat-cfnapi" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.858418 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" containerName="heat-cfnapi" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.859719 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.864525 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.864705 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 21:18:00 crc kubenswrapper[4805]: I0216 21:18:00.872968 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.006215 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e698d49d-5318-412e-98aa-1b979e265892-logs\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.006299 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-config-data\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.006339 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e698d49d-5318-412e-98aa-1b979e265892-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.006418 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.006445 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzqvj\" (UniqueName: \"kubernetes.io/projected/e698d49d-5318-412e-98aa-1b979e265892-kube-api-access-zzqvj\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.006467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.006570 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-scripts\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.006590 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.109100 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zzqvj\" (UniqueName: \"kubernetes.io/projected/e698d49d-5318-412e-98aa-1b979e265892-kube-api-access-zzqvj\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.109157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.109255 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-scripts\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.109275 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.109372 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e698d49d-5318-412e-98aa-1b979e265892-logs\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.109424 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-config-data\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.109462 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e698d49d-5318-412e-98aa-1b979e265892-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.109517 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.110148 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e698d49d-5318-412e-98aa-1b979e265892-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.111645 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e698d49d-5318-412e-98aa-1b979e265892-logs\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.119009 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-scripts\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.120606 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.120663 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4cb8b45edeb659dc9877cd079ad06833b4a4e61f890a1a00cd5e71596d9e0ea/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.135142 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzqvj\" (UniqueName: \"kubernetes.io/projected/e698d49d-5318-412e-98aa-1b979e265892-kube-api-access-zzqvj\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.138762 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-config-data\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.142366 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.142397 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e698d49d-5318-412e-98aa-1b979e265892-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.197214 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f93d250e-b474-4652-90b3-558818d0e8aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f93d250e-b474-4652-90b3-558818d0e8aa\") pod \"glance-default-external-api-0\" (UID: \"e698d49d-5318-412e-98aa-1b979e265892\") " pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.207806 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.630033 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5beda969-4d8e-4f58-8ea5-8249f1d6a77b" path="/var/lib/kubelet/pods/5beda969-4d8e-4f58-8ea5-8249f1d6a77b/volumes" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.631282 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7352be72-3bf9-4377-a713-ab6058b6785f" path="/var/lib/kubelet/pods/7352be72-3bf9-4377-a713-ab6058b6785f/volumes" Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.788906 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"666e47bf-382e-4e38-91ce-81ea94c67208","Type":"ContainerStarted","Data":"d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577"} Feb 16 21:18:01 crc kubenswrapper[4805]: W0216 21:18:01.903858 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode698d49d_5318_412e_98aa_1b979e265892.slice/crio-a347536e37689fb7b2a6893a711a6d32abe412d2060bc38f3f543c08dd36e133 WatchSource:0}: Error finding container a347536e37689fb7b2a6893a711a6d32abe412d2060bc38f3f543c08dd36e133: Status 404 returned error can't find the container with id a347536e37689fb7b2a6893a711a6d32abe412d2060bc38f3f543c08dd36e133 Feb 16 21:18:01 crc kubenswrapper[4805]: I0216 21:18:01.907657 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:18:02 crc kubenswrapper[4805]: I0216 21:18:02.728474 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-55c677d475-j7xgs" Feb 16 21:18:02 crc kubenswrapper[4805]: I0216 21:18:02.781929 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-58889dd686-zfmvp"] Feb 16 21:18:02 crc kubenswrapper[4805]: I0216 21:18:02.782127 4805 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-58889dd686-zfmvp" podUID="dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f" containerName="heat-engine" containerID="cri-o://be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990" gracePeriod=60 Feb 16 21:18:02 crc kubenswrapper[4805]: I0216 21:18:02.823820 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"666e47bf-382e-4e38-91ce-81ea94c67208","Type":"ContainerStarted","Data":"545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa"} Feb 16 21:18:02 crc kubenswrapper[4805]: I0216 21:18:02.825825 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:18:02 crc kubenswrapper[4805]: I0216 21:18:02.853120 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e698d49d-5318-412e-98aa-1b979e265892","Type":"ContainerStarted","Data":"35d527ea7cc57cc68c2682d96a90598b1a13c4ec59ed9d32deb8ec701658a642"} Feb 16 21:18:02 crc kubenswrapper[4805]: I0216 21:18:02.853168 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e698d49d-5318-412e-98aa-1b979e265892","Type":"ContainerStarted","Data":"a347536e37689fb7b2a6893a711a6d32abe412d2060bc38f3f543c08dd36e133"} Feb 16 21:18:02 crc kubenswrapper[4805]: I0216 21:18:02.864109 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.683587171 podStartE2EDuration="5.864092113s" podCreationTimestamp="2026-02-16 21:17:57 +0000 UTC" firstStartedPulling="2026-02-16 21:17:58.831962724 +0000 UTC m=+1296.650646019" lastFinishedPulling="2026-02-16 21:18:02.012467666 +0000 UTC m=+1299.831150961" observedRunningTime="2026-02-16 21:18:02.861109473 +0000 UTC m=+1300.679792768" watchObservedRunningTime="2026-02-16 21:18:02.864092113 +0000 UTC m=+1300.682775408" Feb 16 
21:18:03 crc kubenswrapper[4805]: I0216 21:18:03.865342 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e698d49d-5318-412e-98aa-1b979e265892","Type":"ContainerStarted","Data":"0cc302a7854464f2a546dd79a9789bb702a88d6d9d0fe6d1d5573419000db54f"} Feb 16 21:18:03 crc kubenswrapper[4805]: I0216 21:18:03.893215 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.893186145 podStartE2EDuration="3.893186145s" podCreationTimestamp="2026-02-16 21:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:18:03.887166554 +0000 UTC m=+1301.705849849" watchObservedRunningTime="2026-02-16 21:18:03.893186145 +0000 UTC m=+1301.711869450" Feb 16 21:18:04 crc kubenswrapper[4805]: E0216 21:18:04.186246 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 21:18:04 crc kubenswrapper[4805]: E0216 21:18:04.188322 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 21:18:04 crc kubenswrapper[4805]: E0216 21:18:04.190154 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990" 
cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 21:18:04 crc kubenswrapper[4805]: E0216 21:18:04.190223 4805 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-58889dd686-zfmvp" podUID="dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f" containerName="heat-engine" Feb 16 21:18:04 crc kubenswrapper[4805]: I0216 21:18:04.464956 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:05 crc kubenswrapper[4805]: I0216 21:18:05.883442 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="ceilometer-central-agent" containerID="cri-o://3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9" gracePeriod=30 Feb 16 21:18:05 crc kubenswrapper[4805]: I0216 21:18:05.884004 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="sg-core" containerID="cri-o://d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577" gracePeriod=30 Feb 16 21:18:05 crc kubenswrapper[4805]: I0216 21:18:05.884046 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="ceilometer-notification-agent" containerID="cri-o://d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf" gracePeriod=30 Feb 16 21:18:05 crc kubenswrapper[4805]: I0216 21:18:05.884011 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="proxy-httpd" containerID="cri-o://545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa" gracePeriod=30 Feb 16 21:18:06 crc 
kubenswrapper[4805]: I0216 21:18:06.896094 4805 generic.go:334] "Generic (PLEG): container finished" podID="666e47bf-382e-4e38-91ce-81ea94c67208" containerID="545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa" exitCode=0 Feb 16 21:18:06 crc kubenswrapper[4805]: I0216 21:18:06.896489 4805 generic.go:334] "Generic (PLEG): container finished" podID="666e47bf-382e-4e38-91ce-81ea94c67208" containerID="d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577" exitCode=2 Feb 16 21:18:06 crc kubenswrapper[4805]: I0216 21:18:06.896498 4805 generic.go:334] "Generic (PLEG): container finished" podID="666e47bf-382e-4e38-91ce-81ea94c67208" containerID="d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf" exitCode=0 Feb 16 21:18:06 crc kubenswrapper[4805]: I0216 21:18:06.896192 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"666e47bf-382e-4e38-91ce-81ea94c67208","Type":"ContainerDied","Data":"545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa"} Feb 16 21:18:06 crc kubenswrapper[4805]: I0216 21:18:06.896577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"666e47bf-382e-4e38-91ce-81ea94c67208","Type":"ContainerDied","Data":"d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577"} Feb 16 21:18:06 crc kubenswrapper[4805]: I0216 21:18:06.896596 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"666e47bf-382e-4e38-91ce-81ea94c67208","Type":"ContainerDied","Data":"d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf"} Feb 16 21:18:06 crc kubenswrapper[4805]: I0216 21:18:06.898782 4805 generic.go:334] "Generic (PLEG): container finished" podID="697d83c1-bcef-40ab-b260-070417df0a62" containerID="0d3f55d1e96ba4c67d4b54d5cd430c94cc1a55a8cc7fce91c676a45b9575e56b" exitCode=0 Feb 16 21:18:06 crc kubenswrapper[4805]: I0216 21:18:06.898840 4805 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell0-conductor-db-sync-86jb7" event={"ID":"697d83c1-bcef-40ab-b260-070417df0a62","Type":"ContainerDied","Data":"0d3f55d1e96ba4c67d4b54d5cd430c94cc1a55a8cc7fce91c676a45b9575e56b"} Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.099662 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.100042 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.100090 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.101111 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3695f3bf70d1d75f31deaf59ecf0f2732a5f8a503501ca8da83dcad9ebd6dcda"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.101173 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://3695f3bf70d1d75f31deaf59ecf0f2732a5f8a503501ca8da83dcad9ebd6dcda" gracePeriod=600 Feb 
16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.380274 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.527277 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-combined-ca-bundle\") pod \"697d83c1-bcef-40ab-b260-070417df0a62\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.527671 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-config-data\") pod \"697d83c1-bcef-40ab-b260-070417df0a62\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.527783 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-scripts\") pod \"697d83c1-bcef-40ab-b260-070417df0a62\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.527904 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lc7c\" (UniqueName: \"kubernetes.io/projected/697d83c1-bcef-40ab-b260-070417df0a62-kube-api-access-2lc7c\") pod \"697d83c1-bcef-40ab-b260-070417df0a62\" (UID: \"697d83c1-bcef-40ab-b260-070417df0a62\") " Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.539925 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-scripts" (OuterVolumeSpecName: "scripts") pod "697d83c1-bcef-40ab-b260-070417df0a62" (UID: "697d83c1-bcef-40ab-b260-070417df0a62"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.540202 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697d83c1-bcef-40ab-b260-070417df0a62-kube-api-access-2lc7c" (OuterVolumeSpecName: "kube-api-access-2lc7c") pod "697d83c1-bcef-40ab-b260-070417df0a62" (UID: "697d83c1-bcef-40ab-b260-070417df0a62"). InnerVolumeSpecName "kube-api-access-2lc7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.564544 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "697d83c1-bcef-40ab-b260-070417df0a62" (UID: "697d83c1-bcef-40ab-b260-070417df0a62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.565051 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-config-data" (OuterVolumeSpecName: "config-data") pod "697d83c1-bcef-40ab-b260-070417df0a62" (UID: "697d83c1-bcef-40ab-b260-070417df0a62"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.631610 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.631644 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.631660 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/697d83c1-bcef-40ab-b260-070417df0a62-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.631674 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lc7c\" (UniqueName: \"kubernetes.io/projected/697d83c1-bcef-40ab-b260-070417df0a62-kube-api-access-2lc7c\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.953888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-86jb7" event={"ID":"697d83c1-bcef-40ab-b260-070417df0a62","Type":"ContainerDied","Data":"21fde207bbc0e2160932fd610c58d471c43a9e84631d74ee2347e91eb07e847a"} Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.953931 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21fde207bbc0e2160932fd610c58d471c43a9e84631d74ee2347e91eb07e847a" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.954034 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-86jb7" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.961626 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"3695f3bf70d1d75f31deaf59ecf0f2732a5f8a503501ca8da83dcad9ebd6dcda"} Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.961625 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="3695f3bf70d1d75f31deaf59ecf0f2732a5f8a503501ca8da83dcad9ebd6dcda" exitCode=0 Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.961673 4805 scope.go:117] "RemoveContainer" containerID="5f1616af32f423ba92145c911bf150c6fe834753890981f8e09fc4faccf82ee6" Feb 16 21:18:08 crc kubenswrapper[4805]: I0216 21:18:08.961704 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e"} Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.064550 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 21:18:09 crc kubenswrapper[4805]: E0216 21:18:09.065011 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697d83c1-bcef-40ab-b260-070417df0a62" containerName="nova-cell0-conductor-db-sync" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.065029 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="697d83c1-bcef-40ab-b260-070417df0a62" containerName="nova-cell0-conductor-db-sync" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.065223 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="697d83c1-bcef-40ab-b260-070417df0a62" containerName="nova-cell0-conductor-db-sync" Feb 16 21:18:09 crc 
kubenswrapper[4805]: I0216 21:18:09.065993 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.068763 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fbgc6" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.069299 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.083870 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.250980 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dbc69b-9207-42a2-becf-d09dc88763cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"14dbc69b-9207-42a2-becf-d09dc88763cf\") " pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.251468 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp9mm\" (UniqueName: \"kubernetes.io/projected/14dbc69b-9207-42a2-becf-d09dc88763cf-kube-api-access-zp9mm\") pod \"nova-cell0-conductor-0\" (UID: \"14dbc69b-9207-42a2-becf-d09dc88763cf\") " pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.251628 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14dbc69b-9207-42a2-becf-d09dc88763cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"14dbc69b-9207-42a2-becf-d09dc88763cf\") " pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.354531 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dbc69b-9207-42a2-becf-d09dc88763cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"14dbc69b-9207-42a2-becf-d09dc88763cf\") " pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.354600 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp9mm\" (UniqueName: \"kubernetes.io/projected/14dbc69b-9207-42a2-becf-d09dc88763cf-kube-api-access-zp9mm\") pod \"nova-cell0-conductor-0\" (UID: \"14dbc69b-9207-42a2-becf-d09dc88763cf\") " pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.354837 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14dbc69b-9207-42a2-becf-d09dc88763cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"14dbc69b-9207-42a2-becf-d09dc88763cf\") " pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.359934 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14dbc69b-9207-42a2-becf-d09dc88763cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"14dbc69b-9207-42a2-becf-d09dc88763cf\") " pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.360055 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dbc69b-9207-42a2-becf-d09dc88763cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"14dbc69b-9207-42a2-becf-d09dc88763cf\") " pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.377642 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp9mm\" (UniqueName: 
\"kubernetes.io/projected/14dbc69b-9207-42a2-becf-d09dc88763cf-kube-api-access-zp9mm\") pod \"nova-cell0-conductor-0\" (UID: \"14dbc69b-9207-42a2-becf-d09dc88763cf\") " pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:09 crc kubenswrapper[4805]: I0216 21:18:09.399783 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:10 crc kubenswrapper[4805]: I0216 21:18:10.521027 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 21:18:10 crc kubenswrapper[4805]: I0216 21:18:10.991228 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"14dbc69b-9207-42a2-becf-d09dc88763cf","Type":"ContainerStarted","Data":"812875d54ada3d3b5a7ca40f83664a7ac8060082fa4614b88ec22def4dce1423"} Feb 16 21:18:10 crc kubenswrapper[4805]: I0216 21:18:10.991842 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:10 crc kubenswrapper[4805]: I0216 21:18:10.991853 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"14dbc69b-9207-42a2-becf-d09dc88763cf","Type":"ContainerStarted","Data":"e0c19550e9b7e562f5c53dee16dc0a063620a6c1a923fc66eae135e3299e4166"} Feb 16 21:18:11 crc kubenswrapper[4805]: I0216 21:18:11.021142 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.02112341 podStartE2EDuration="2.02112341s" podCreationTimestamp="2026-02-16 21:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:18:11.009128828 +0000 UTC m=+1308.827812123" watchObservedRunningTime="2026-02-16 21:18:11.02112341 +0000 UTC m=+1308.839806705" Feb 16 21:18:11 crc kubenswrapper[4805]: I0216 21:18:11.208515 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 21:18:11 crc kubenswrapper[4805]: I0216 21:18:11.208750 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 21:18:11 crc kubenswrapper[4805]: I0216 21:18:11.260995 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 21:18:11 crc kubenswrapper[4805]: I0216 21:18:11.271624 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 21:18:12 crc kubenswrapper[4805]: I0216 21:18:12.001750 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 21:18:12 crc kubenswrapper[4805]: I0216 21:18:12.002110 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 21:18:12 crc kubenswrapper[4805]: I0216 21:18:12.909953 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:12 crc kubenswrapper[4805]: I0216 21:18:12.917059 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-58889dd686-zfmvp" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.022005 4805 generic.go:334] "Generic (PLEG): container finished" podID="666e47bf-382e-4e38-91ce-81ea94c67208" containerID="3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9" exitCode=0 Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.022114 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"666e47bf-382e-4e38-91ce-81ea94c67208","Type":"ContainerDied","Data":"3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9"} Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.022147 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"666e47bf-382e-4e38-91ce-81ea94c67208","Type":"ContainerDied","Data":"962d84605418db10a62b179814ed1591e0e47891990419076bcebf948288d28a"} Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.022196 4805 scope.go:117] "RemoveContainer" containerID="545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.022469 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.027023 4805 generic.go:334] "Generic (PLEG): container finished" podID="dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f" containerID="be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990" exitCode=0 Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.027364 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-58889dd686-zfmvp" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.027870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-58889dd686-zfmvp" event={"ID":"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f","Type":"ContainerDied","Data":"be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990"} Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.027921 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-58889dd686-zfmvp" event={"ID":"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f","Type":"ContainerDied","Data":"4a124320440ce31c56676b19a2063be9fd485ea9a42940effa5493812ff72b28"} Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.049312 4805 scope.go:117] "RemoveContainer" containerID="d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.061899 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data\") pod \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.061944 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data-custom\") pod \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.061976 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-combined-ca-bundle\") pod \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.062039 
4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-scripts\") pod \"666e47bf-382e-4e38-91ce-81ea94c67208\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.062060 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-sg-core-conf-yaml\") pod \"666e47bf-382e-4e38-91ce-81ea94c67208\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.062107 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-log-httpd\") pod \"666e47bf-382e-4e38-91ce-81ea94c67208\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.062160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-combined-ca-bundle\") pod \"666e47bf-382e-4e38-91ce-81ea94c67208\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.062210 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-config-data\") pod \"666e47bf-382e-4e38-91ce-81ea94c67208\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.062241 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-429d4\" (UniqueName: \"kubernetes.io/projected/666e47bf-382e-4e38-91ce-81ea94c67208-kube-api-access-429d4\") pod \"666e47bf-382e-4e38-91ce-81ea94c67208\" 
(UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.062309 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-run-httpd\") pod \"666e47bf-382e-4e38-91ce-81ea94c67208\" (UID: \"666e47bf-382e-4e38-91ce-81ea94c67208\") " Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.062371 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcc6t\" (UniqueName: \"kubernetes.io/projected/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-kube-api-access-qcc6t\") pod \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\" (UID: \"dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f\") " Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.062846 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "666e47bf-382e-4e38-91ce-81ea94c67208" (UID: "666e47bf-382e-4e38-91ce-81ea94c67208"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.063141 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "666e47bf-382e-4e38-91ce-81ea94c67208" (UID: "666e47bf-382e-4e38-91ce-81ea94c67208"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.070885 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-scripts" (OuterVolumeSpecName: "scripts") pod "666e47bf-382e-4e38-91ce-81ea94c67208" (UID: "666e47bf-382e-4e38-91ce-81ea94c67208"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.071113 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-kube-api-access-qcc6t" (OuterVolumeSpecName: "kube-api-access-qcc6t") pod "dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f" (UID: "dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f"). InnerVolumeSpecName "kube-api-access-qcc6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.073502 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f" (UID: "dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.081299 4805 scope.go:117] "RemoveContainer" containerID="d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.086591 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/666e47bf-382e-4e38-91ce-81ea94c67208-kube-api-access-429d4" (OuterVolumeSpecName: "kube-api-access-429d4") pod "666e47bf-382e-4e38-91ce-81ea94c67208" (UID: "666e47bf-382e-4e38-91ce-81ea94c67208"). InnerVolumeSpecName "kube-api-access-429d4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.119894 4805 scope.go:117] "RemoveContainer" containerID="3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.122379 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f" (UID: "dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.124610 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "666e47bf-382e-4e38-91ce-81ea94c67208" (UID: "666e47bf-382e-4e38-91ce-81ea94c67208"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.149454 4805 scope.go:117] "RemoveContainer" containerID="545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa" Feb 16 21:18:13 crc kubenswrapper[4805]: E0216 21:18:13.152156 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa\": container with ID starting with 545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa not found: ID does not exist" containerID="545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.152234 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa"} err="failed to get container status \"545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa\": rpc error: code = NotFound desc = could not find container \"545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa\": container with ID starting with 545206cf4ca4ee3d29ab74d6ec8ea3fb5b2de79b05b60cf257eefbb8871c8daa not found: ID does not exist" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.152283 4805 scope.go:117] "RemoveContainer" containerID="d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577" Feb 16 21:18:13 crc kubenswrapper[4805]: E0216 21:18:13.155851 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577\": container with ID starting with d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577 not found: ID does not exist" containerID="d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.155908 
4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577"} err="failed to get container status \"d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577\": rpc error: code = NotFound desc = could not find container \"d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577\": container with ID starting with d4f214c7f0b4ca23dad36aa08f43140ef5a8a78dc62ca943c7ba0543f99cc577 not found: ID does not exist" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.155934 4805 scope.go:117] "RemoveContainer" containerID="d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf" Feb 16 21:18:13 crc kubenswrapper[4805]: E0216 21:18:13.162062 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf\": container with ID starting with d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf not found: ID does not exist" containerID="d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.162378 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf"} err="failed to get container status \"d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf\": rpc error: code = NotFound desc = could not find container \"d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf\": container with ID starting with d3ea9f6327fcb50081bbca12073040fcc15d116bf571049bf3b3b821411d37bf not found: ID does not exist" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.162410 4805 scope.go:117] "RemoveContainer" containerID="3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9" Feb 16 21:18:13 crc kubenswrapper[4805]: E0216 
21:18:13.162887 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9\": container with ID starting with 3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9 not found: ID does not exist" containerID="3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.162929 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9"} err="failed to get container status \"3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9\": rpc error: code = NotFound desc = could not find container \"3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9\": container with ID starting with 3ee04c36e5873631e7c3da01c18325bc4718563c22d4f45fcb71e387d61c25d9 not found: ID does not exist" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.162958 4805 scope.go:117] "RemoveContainer" containerID="be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.166876 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.166911 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.166923 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:13 crc 
kubenswrapper[4805]: I0216 21:18:13.166935 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-429d4\" (UniqueName: \"kubernetes.io/projected/666e47bf-382e-4e38-91ce-81ea94c67208-kube-api-access-429d4\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.166944 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/666e47bf-382e-4e38-91ce-81ea94c67208-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.166953 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcc6t\" (UniqueName: \"kubernetes.io/projected/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-kube-api-access-qcc6t\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.166961 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.166969 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.180944 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data" (OuterVolumeSpecName: "config-data") pod "dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f" (UID: "dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.196513 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "666e47bf-382e-4e38-91ce-81ea94c67208" (UID: "666e47bf-382e-4e38-91ce-81ea94c67208"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.211118 4805 scope.go:117] "RemoveContainer" containerID="be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990" Feb 16 21:18:13 crc kubenswrapper[4805]: E0216 21:18:13.211772 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990\": container with ID starting with be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990 not found: ID does not exist" containerID="be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.211798 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990"} err="failed to get container status \"be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990\": rpc error: code = NotFound desc = could not find container \"be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990\": container with ID starting with be33a0dcd41ae5d548a8027773b68dc34b6fbb8f893a3d852800542c79800990 not found: ID does not exist" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.253440 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-config-data" (OuterVolumeSpecName: "config-data") pod 
"666e47bf-382e-4e38-91ce-81ea94c67208" (UID: "666e47bf-382e-4e38-91ce-81ea94c67208"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.270682 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.270728 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.270738 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666e47bf-382e-4e38-91ce-81ea94c67208-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.423606 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-58889dd686-zfmvp"] Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.436501 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-58889dd686-zfmvp"] Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.449608 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.465174 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.482545 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:13 crc kubenswrapper[4805]: E0216 21:18:13.483126 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="proxy-httpd" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.483155 
4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="proxy-httpd" Feb 16 21:18:13 crc kubenswrapper[4805]: E0216 21:18:13.483177 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f" containerName="heat-engine" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.483186 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f" containerName="heat-engine" Feb 16 21:18:13 crc kubenswrapper[4805]: E0216 21:18:13.483205 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="ceilometer-central-agent" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.483213 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="ceilometer-central-agent" Feb 16 21:18:13 crc kubenswrapper[4805]: E0216 21:18:13.483222 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="ceilometer-notification-agent" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.483228 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="ceilometer-notification-agent" Feb 16 21:18:13 crc kubenswrapper[4805]: E0216 21:18:13.483241 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="sg-core" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.483251 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="sg-core" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.483468 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="ceilometer-notification-agent" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 
21:18:13.483487 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f" containerName="heat-engine" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.483495 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="proxy-httpd" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.483508 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="sg-core" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.483518 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" containerName="ceilometer-central-agent" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.485619 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.489025 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.488881 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.492034 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.578501 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-config-data\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.578566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.578751 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-log-httpd\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.578892 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4kmz\" (UniqueName: \"kubernetes.io/projected/125b1879-8e3b-4b4e-9035-0905f9d073c5-kube-api-access-n4kmz\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.579067 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-run-httpd\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.579114 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-scripts\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.579246 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.613693 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="666e47bf-382e-4e38-91ce-81ea94c67208" path="/var/lib/kubelet/pods/666e47bf-382e-4e38-91ce-81ea94c67208/volumes" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.614826 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f" path="/var/lib/kubelet/pods/dbbbb7f6-1ac9-4145-aeab-72b31a4e1b4f/volumes" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.684153 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.684860 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-config-data\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.685207 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.685295 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-log-httpd\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 
21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.685510 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4kmz\" (UniqueName: \"kubernetes.io/projected/125b1879-8e3b-4b4e-9035-0905f9d073c5-kube-api-access-n4kmz\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.685891 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-run-httpd\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.685994 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-scripts\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.686191 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-log-httpd\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.686796 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-run-httpd\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.690347 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.691656 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.692764 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-scripts\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.692938 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-config-data\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.719248 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4kmz\" (UniqueName: \"kubernetes.io/projected/125b1879-8e3b-4b4e-9035-0905f9d073c5-kube-api-access-n4kmz\") pod \"ceilometer-0\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " pod="openstack/ceilometer-0" Feb 16 21:18:13 crc kubenswrapper[4805]: I0216 21:18:13.814169 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:14 crc kubenswrapper[4805]: I0216 21:18:14.360002 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:14 crc kubenswrapper[4805]: W0216 21:18:14.366674 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod125b1879_8e3b_4b4e_9035_0905f9d073c5.slice/crio-ea490ecbaf68e6653a1dabf2ed3995f6e0913bcbb72cae2d56a40b619201c472 WatchSource:0}: Error finding container ea490ecbaf68e6653a1dabf2ed3995f6e0913bcbb72cae2d56a40b619201c472: Status 404 returned error can't find the container with id ea490ecbaf68e6653a1dabf2ed3995f6e0913bcbb72cae2d56a40b619201c472 Feb 16 21:18:14 crc kubenswrapper[4805]: I0216 21:18:14.947980 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 21:18:14 crc kubenswrapper[4805]: I0216 21:18:14.948346 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:18:14 crc kubenswrapper[4805]: I0216 21:18:14.981505 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 21:18:15 crc kubenswrapper[4805]: I0216 21:18:15.066990 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125b1879-8e3b-4b4e-9035-0905f9d073c5","Type":"ContainerStarted","Data":"ea490ecbaf68e6653a1dabf2ed3995f6e0913bcbb72cae2d56a40b619201c472"} Feb 16 21:18:16 crc kubenswrapper[4805]: I0216 21:18:16.080429 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125b1879-8e3b-4b4e-9035-0905f9d073c5","Type":"ContainerStarted","Data":"c76b530550a34d560f0482f9cc359f6a17b9dff76b8e39aea02d7b7c4dd591b6"} Feb 16 21:18:16 crc kubenswrapper[4805]: I0216 21:18:16.081646 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"125b1879-8e3b-4b4e-9035-0905f9d073c5","Type":"ContainerStarted","Data":"25f7c2fcfdabfc1ce0316e1e95e4e6390d14afb0eac1b6f16b9d80af7a0d15dd"} Feb 16 21:18:17 crc kubenswrapper[4805]: I0216 21:18:17.095617 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125b1879-8e3b-4b4e-9035-0905f9d073c5","Type":"ContainerStarted","Data":"6febb5428578cc2651774b63ab4f812a590c97a323d8249f73e82898f5fab627"} Feb 16 21:18:19 crc kubenswrapper[4805]: I0216 21:18:19.134259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125b1879-8e3b-4b4e-9035-0905f9d073c5","Type":"ContainerStarted","Data":"5f4ec14408d48854407fd281874a396047ec25b1dd447f36439bf732b5e5ef4e"} Feb 16 21:18:19 crc kubenswrapper[4805]: I0216 21:18:19.139352 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:18:19 crc kubenswrapper[4805]: I0216 21:18:19.181075 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.606450233 podStartE2EDuration="6.181053786s" podCreationTimestamp="2026-02-16 21:18:13 +0000 UTC" firstStartedPulling="2026-02-16 21:18:14.370850475 +0000 UTC m=+1312.189533770" lastFinishedPulling="2026-02-16 21:18:17.945454018 +0000 UTC m=+1315.764137323" observedRunningTime="2026-02-16 21:18:19.163247687 +0000 UTC m=+1316.981931022" watchObservedRunningTime="2026-02-16 21:18:19.181053786 +0000 UTC m=+1316.999737091" Feb 16 21:18:19 crc kubenswrapper[4805]: I0216 21:18:19.442812 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.077072 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-kw7z8"] Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.078831 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.080868 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.081159 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.098536 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kw7z8"] Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.137454 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwtpg\" (UniqueName: \"kubernetes.io/projected/521423fc-6efd-4f61-89f3-f1523eb8e9f5-kube-api-access-rwtpg\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.137595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-scripts\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.137644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-config-data\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.137682 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.239955 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-scripts\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.240026 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-config-data\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.240058 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.240179 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwtpg\" (UniqueName: \"kubernetes.io/projected/521423fc-6efd-4f61-89f3-f1523eb8e9f5-kube-api-access-rwtpg\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.250235 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-config-data\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.273886 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-scripts\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.283921 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.285754 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.291239 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.293665 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.303744 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwtpg\" (UniqueName: \"kubernetes.io/projected/521423fc-6efd-4f61-89f3-f1523eb8e9f5-kube-api-access-rwtpg\") pod \"nova-cell0-cell-mapping-kw7z8\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.313322 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:20 
crc kubenswrapper[4805]: I0216 21:18:20.315256 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.330071 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.342090 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-config-data\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.342155 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.342210 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-config-data\") pod \"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.342229 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.342250 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nvt5s\" (UniqueName: \"kubernetes.io/projected/c970e977-c22d-46d1-9062-37981c5302dc-kube-api-access-nvt5s\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.342346 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-logs\") pod \"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.342365 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c970e977-c22d-46d1-9062-37981c5302dc-logs\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.342480 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdqdv\" (UniqueName: \"kubernetes.io/projected/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-kube-api-access-qdqdv\") pod \"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.342776 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.415632 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.452821 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-logs\") pod \"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.452864 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c970e977-c22d-46d1-9062-37981c5302dc-logs\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.452963 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdqdv\" (UniqueName: \"kubernetes.io/projected/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-kube-api-access-qdqdv\") pod \"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.452999 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-config-data\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.453027 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.453067 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-config-data\") pod \"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.453083 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.453100 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvt5s\" (UniqueName: \"kubernetes.io/projected/c970e977-c22d-46d1-9062-37981c5302dc-kube-api-access-nvt5s\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.453860 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-logs\") pod \"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.454200 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c970e977-c22d-46d1-9062-37981c5302dc-logs\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.458972 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.466468 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.487335 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-config-data\") pod \"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.491042 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.491176 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.493107 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdqdv\" (UniqueName: \"kubernetes.io/projected/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-kube-api-access-qdqdv\") pod \"nova-metadata-0\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.493129 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.495053 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.509877 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvt5s\" (UniqueName: \"kubernetes.io/projected/c970e977-c22d-46d1-9062-37981c5302dc-kube-api-access-nvt5s\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.515603 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-config-data\") pod \"nova-api-0\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.548061 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.558189 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.558269 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.558379 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mblz8\" (UniqueName: \"kubernetes.io/projected/dd638c31-2ddf-4958-8149-7f3ebcb9b844-kube-api-access-mblz8\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.614989 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.621381 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.628621 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.663693 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mblz8\" (UniqueName: \"kubernetes.io/projected/dd638c31-2ddf-4958-8149-7f3ebcb9b844-kube-api-access-mblz8\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.664100 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.664753 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.671380 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.674451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.684416 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.688585 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.688667 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.710001 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mblz8\" (UniqueName: \"kubernetes.io/projected/dd638c31-2ddf-4958-8149-7f3ebcb9b844-kube-api-access-mblz8\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.744781 4805 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-9wqlb"] Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.767972 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-9wqlb"] Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.768084 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.776749 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.776816 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-config-data\") pod \"nova-scheduler-0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.776920 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4pfq\" (UniqueName: \"kubernetes.io/projected/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-kube-api-access-s4pfq\") pod \"nova-scheduler-0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.881383 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc 
kubenswrapper[4805]: I0216 21:18:20.881586 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phvqx\" (UniqueName: \"kubernetes.io/projected/cf368b03-7df6-43f3-ad40-ec381d152021-kube-api-access-phvqx\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.881675 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.881832 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.881886 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.881960 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-config-data\") pod \"nova-scheduler-0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:20 crc kubenswrapper[4805]: 
I0216 21:18:20.882021 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-config\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.882112 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.882200 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4pfq\" (UniqueName: \"kubernetes.io/projected/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-kube-api-access-s4pfq\") pod \"nova-scheduler-0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.907613 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4pfq\" (UniqueName: \"kubernetes.io/projected/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-kube-api-access-s4pfq\") pod \"nova-scheduler-0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.907769 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-config-data\") pod \"nova-scheduler-0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.920041 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.945702 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.984382 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phvqx\" (UniqueName: \"kubernetes.io/projected/cf368b03-7df6-43f3-ad40-ec381d152021-kube-api-access-phvqx\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.984450 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.984529 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.984594 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-config\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc 
kubenswrapper[4805]: I0216 21:18:20.984653 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.984748 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.985767 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.986933 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.987636 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.987969 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-config\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:20 crc kubenswrapper[4805]: I0216 21:18:20.988201 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.013747 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phvqx\" (UniqueName: \"kubernetes.io/projected/cf368b03-7df6-43f3-ad40-ec381d152021-kube-api-access-phvqx\") pod \"dnsmasq-dns-568d7fd7cf-9wqlb\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.033774 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.100938 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.113893 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kw7z8"] Feb 16 21:18:21 crc kubenswrapper[4805]: W0216 21:18:21.154569 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod521423fc_6efd_4f61_89f3_f1523eb8e9f5.slice/crio-7c1655b18cf1cf85febf03e59077384d14a40fda2d6ca5a11ca369f14ebd316f WatchSource:0}: Error finding container 7c1655b18cf1cf85febf03e59077384d14a40fda2d6ca5a11ca369f14ebd316f: Status 404 returned error can't find the container with id 7c1655b18cf1cf85febf03e59077384d14a40fda2d6ca5a11ca369f14ebd316f Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.171195 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kw7z8" event={"ID":"521423fc-6efd-4f61-89f3-f1523eb8e9f5","Type":"ContainerStarted","Data":"7c1655b18cf1cf85febf03e59077384d14a40fda2d6ca5a11ca369f14ebd316f"} Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.428712 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:18:21 crc kubenswrapper[4805]: W0216 21:18:21.613036 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e3de38a_9461_4b9e_a5b1_826e9966a3b2.slice/crio-a48ad126f9e6a2823d5f98b73dcdfff03c7d54066492142ae3521c3014f6043c WatchSource:0}: Error finding container a48ad126f9e6a2823d5f98b73dcdfff03c7d54066492142ae3521c3014f6043c: Status 404 returned error can't find the container with id a48ad126f9e6a2823d5f98b73dcdfff03c7d54066492142ae3521c3014f6043c Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.615616 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.778373 4805 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.876805 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-9wqlb"] Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.914913 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.948961 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gxz95"] Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.955687 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.961057 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.962930 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 21:18:21 crc kubenswrapper[4805]: I0216 21:18:21.965444 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gxz95"] Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.033075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxtj\" (UniqueName: \"kubernetes.io/projected/205f4efe-0a2d-4d28-a929-c89b671cefae-kube-api-access-8bxtj\") pod \"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.033168 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-config-data\") pod 
\"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.033212 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-scripts\") pod \"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.033314 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.134675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.134855 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bxtj\" (UniqueName: \"kubernetes.io/projected/205f4efe-0a2d-4d28-a929-c89b671cefae-kube-api-access-8bxtj\") pod \"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.134905 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-config-data\") pod \"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.134939 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-scripts\") pod \"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.140475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-scripts\") pod \"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.142423 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-config-data\") pod \"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.145346 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.168489 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bxtj\" (UniqueName: 
\"kubernetes.io/projected/205f4efe-0a2d-4d28-a929-c89b671cefae-kube-api-access-8bxtj\") pod \"nova-cell1-conductor-db-sync-gxz95\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.199664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kw7z8" event={"ID":"521423fc-6efd-4f61-89f3-f1523eb8e9f5","Type":"ContainerStarted","Data":"6490e84da6532409dec05cfaae4b31e66b16ad62ad6caa714a2ffd4f6ea6c2d3"} Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.207481 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3de38a-9461-4b9e-a5b1-826e9966a3b2","Type":"ContainerStarted","Data":"a48ad126f9e6a2823d5f98b73dcdfff03c7d54066492142ae3521c3014f6043c"} Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.209680 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" event={"ID":"cf368b03-7df6-43f3-ad40-ec381d152021","Type":"ContainerStarted","Data":"0d325a4cf9225ea9ace7415998a87c0f80ba64f569223ee4855baed3ac3d4608"} Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.209713 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" event={"ID":"cf368b03-7df6-43f3-ad40-ec381d152021","Type":"ContainerStarted","Data":"baead96a06603392e7fdf83a9a3a28d0fe3f8417931fa9b4b910675010d7d7f9"} Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.220877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0","Type":"ContainerStarted","Data":"695bd2bacb48c5cd55b4fd9e806c09a6705895601537272d5c554d92d3de95d2"} Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.222792 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-kw7z8" podStartSLOduration=2.22277487 
podStartE2EDuration="2.22277487s" podCreationTimestamp="2026-02-16 21:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:18:22.220268342 +0000 UTC m=+1320.038951637" watchObservedRunningTime="2026-02-16 21:18:22.22277487 +0000 UTC m=+1320.041458155" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.224266 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dd638c31-2ddf-4958-8149-7f3ebcb9b844","Type":"ContainerStarted","Data":"2b8500ff0fefd0bc95cce6f866de8b24bb9c098c3a97bb5fbe1a94979eaef23c"} Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.234824 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c970e977-c22d-46d1-9062-37981c5302dc","Type":"ContainerStarted","Data":"c22cf7a97e91b5cdccd5936fa7b188a6d3d7ae56d0e3b216d8792dcee3115517"} Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.364564 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:22 crc kubenswrapper[4805]: I0216 21:18:22.931999 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gxz95"] Feb 16 21:18:22 crc kubenswrapper[4805]: W0216 21:18:22.946249 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod205f4efe_0a2d_4d28_a929_c89b671cefae.slice/crio-a686473fe1126b7ee3a0607e3ac4c2cb909389c29e27f4d2bcf130e88c846b71 WatchSource:0}: Error finding container a686473fe1126b7ee3a0607e3ac4c2cb909389c29e27f4d2bcf130e88c846b71: Status 404 returned error can't find the container with id a686473fe1126b7ee3a0607e3ac4c2cb909389c29e27f4d2bcf130e88c846b71 Feb 16 21:18:23 crc kubenswrapper[4805]: I0216 21:18:23.259514 4805 generic.go:334] "Generic (PLEG): container finished" podID="cf368b03-7df6-43f3-ad40-ec381d152021" containerID="0d325a4cf9225ea9ace7415998a87c0f80ba64f569223ee4855baed3ac3d4608" exitCode=0 Feb 16 21:18:23 crc kubenswrapper[4805]: I0216 21:18:23.259686 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" event={"ID":"cf368b03-7df6-43f3-ad40-ec381d152021","Type":"ContainerDied","Data":"0d325a4cf9225ea9ace7415998a87c0f80ba64f569223ee4855baed3ac3d4608"} Feb 16 21:18:23 crc kubenswrapper[4805]: I0216 21:18:23.291532 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gxz95" event={"ID":"205f4efe-0a2d-4d28-a929-c89b671cefae","Type":"ContainerStarted","Data":"70c3496eaa3cdd95cb25af45b1d8d0d1dc143e9edb600988a1d0ede7d7095ac8"} Feb 16 21:18:23 crc kubenswrapper[4805]: I0216 21:18:23.291568 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gxz95" event={"ID":"205f4efe-0a2d-4d28-a929-c89b671cefae","Type":"ContainerStarted","Data":"a686473fe1126b7ee3a0607e3ac4c2cb909389c29e27f4d2bcf130e88c846b71"} 
Feb 16 21:18:23 crc kubenswrapper[4805]: I0216 21:18:23.339500 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-gxz95" podStartSLOduration=2.339477835 podStartE2EDuration="2.339477835s" podCreationTimestamp="2026-02-16 21:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:18:23.305120492 +0000 UTC m=+1321.123803787" watchObservedRunningTime="2026-02-16 21:18:23.339477835 +0000 UTC m=+1321.158161130" Feb 16 21:18:24 crc kubenswrapper[4805]: I0216 21:18:24.445109 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:24 crc kubenswrapper[4805]: I0216 21:18:24.564428 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.357245 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3de38a-9461-4b9e-a5b1-826e9966a3b2","Type":"ContainerStarted","Data":"78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7"} Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.357700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3de38a-9461-4b9e-a5b1-826e9966a3b2","Type":"ContainerStarted","Data":"b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987"} Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.357391 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e3de38a-9461-4b9e-a5b1-826e9966a3b2" containerName="nova-metadata-metadata" containerID="cri-o://78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7" gracePeriod=30 Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.357309 4805 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="7e3de38a-9461-4b9e-a5b1-826e9966a3b2" containerName="nova-metadata-log" containerID="cri-o://b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987" gracePeriod=30 Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.364694 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" event={"ID":"cf368b03-7df6-43f3-ad40-ec381d152021","Type":"ContainerStarted","Data":"3267befb626341ec1e07249560475c674806caff52d8f8836cc5ffe148d0e403"} Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.365684 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.374655 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.970642122 podStartE2EDuration="7.374636355s" podCreationTimestamp="2026-02-16 21:18:20 +0000 UTC" firstStartedPulling="2026-02-16 21:18:21.628559305 +0000 UTC m=+1319.447242600" lastFinishedPulling="2026-02-16 21:18:26.032553538 +0000 UTC m=+1323.851236833" observedRunningTime="2026-02-16 21:18:27.374006108 +0000 UTC m=+1325.192689423" watchObservedRunningTime="2026-02-16 21:18:27.374636355 +0000 UTC m=+1325.193319650" Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.375225 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0","Type":"ContainerStarted","Data":"a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24"} Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.396107 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dd638c31-2ddf-4958-8149-7f3ebcb9b844","Type":"ContainerStarted","Data":"2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f"} Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.396439 4805 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="dd638c31-2ddf-4958-8149-7f3ebcb9b844" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f" gracePeriod=30 Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.415564 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c970e977-c22d-46d1-9062-37981c5302dc","Type":"ContainerStarted","Data":"6278f140e04d5ac7be41f220b3a50ae372905e7f56a8b3e2bfa0d1c5614d57e7"} Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.415607 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c970e977-c22d-46d1-9062-37981c5302dc","Type":"ContainerStarted","Data":"4052690266f057d6cdf2956d14aa0bf7a4e339bae7030ee61237210f5d00874b"} Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.432207 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" podStartSLOduration=7.43219063 podStartE2EDuration="7.43219063s" podCreationTimestamp="2026-02-16 21:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:18:27.400815667 +0000 UTC m=+1325.219498962" watchObservedRunningTime="2026-02-16 21:18:27.43219063 +0000 UTC m=+1325.250873915" Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.467412 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.467735 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="ceilometer-central-agent" containerID="cri-o://25f7c2fcfdabfc1ce0316e1e95e4e6390d14afb0eac1b6f16b9d80af7a0d15dd" gracePeriod=30 Feb 16 21:18:27 crc 
kubenswrapper[4805]: I0216 21:18:27.467849 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="proxy-httpd" containerID="cri-o://5f4ec14408d48854407fd281874a396047ec25b1dd447f36439bf732b5e5ef4e" gracePeriod=30 Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.467908 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="ceilometer-notification-agent" containerID="cri-o://c76b530550a34d560f0482f9cc359f6a17b9dff76b8e39aea02d7b7c4dd591b6" gracePeriod=30 Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.468020 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="sg-core" containerID="cri-o://6febb5428578cc2651774b63ab4f812a590c97a323d8249f73e82898f5fab627" gracePeriod=30 Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.472235 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.330106313 podStartE2EDuration="7.472223525s" podCreationTimestamp="2026-02-16 21:18:20 +0000 UTC" firstStartedPulling="2026-02-16 21:18:21.899331055 +0000 UTC m=+1319.718014340" lastFinishedPulling="2026-02-16 21:18:26.041448257 +0000 UTC m=+1323.860131552" observedRunningTime="2026-02-16 21:18:27.439036844 +0000 UTC m=+1325.257720139" watchObservedRunningTime="2026-02-16 21:18:27.472223525 +0000 UTC m=+1325.290906820" Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.491685 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.248292426 podStartE2EDuration="7.491664327s" podCreationTimestamp="2026-02-16 21:18:20 +0000 UTC" firstStartedPulling="2026-02-16 21:18:21.786903106 +0000 UTC 
m=+1319.605586401" lastFinishedPulling="2026-02-16 21:18:26.030275007 +0000 UTC m=+1323.848958302" observedRunningTime="2026-02-16 21:18:27.461170719 +0000 UTC m=+1325.279854014" watchObservedRunningTime="2026-02-16 21:18:27.491664327 +0000 UTC m=+1325.310347622" Feb 16 21:18:27 crc kubenswrapper[4805]: I0216 21:18:27.578672 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.978930993 podStartE2EDuration="7.578648592s" podCreationTimestamp="2026-02-16 21:18:20 +0000 UTC" firstStartedPulling="2026-02-16 21:18:21.43028078 +0000 UTC m=+1319.248964075" lastFinishedPulling="2026-02-16 21:18:26.029998379 +0000 UTC m=+1323.848681674" observedRunningTime="2026-02-16 21:18:27.499164469 +0000 UTC m=+1325.317847764" watchObservedRunningTime="2026-02-16 21:18:27.578648592 +0000 UTC m=+1325.397331887" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.252153 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.410575 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-logs\") pod \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.410661 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdqdv\" (UniqueName: \"kubernetes.io/projected/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-kube-api-access-qdqdv\") pod \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.410748 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-combined-ca-bundle\") pod \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.410804 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-config-data\") pod \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\" (UID: \"7e3de38a-9461-4b9e-a5b1-826e9966a3b2\") " Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.412161 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-logs" (OuterVolumeSpecName: "logs") pod "7e3de38a-9461-4b9e-a5b1-826e9966a3b2" (UID: "7e3de38a-9461-4b9e-a5b1-826e9966a3b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.426967 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-kube-api-access-qdqdv" (OuterVolumeSpecName: "kube-api-access-qdqdv") pod "7e3de38a-9461-4b9e-a5b1-826e9966a3b2" (UID: "7e3de38a-9461-4b9e-a5b1-826e9966a3b2"). InnerVolumeSpecName "kube-api-access-qdqdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.444953 4805 generic.go:334] "Generic (PLEG): container finished" podID="7e3de38a-9461-4b9e-a5b1-826e9966a3b2" containerID="78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7" exitCode=0 Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.445101 4805 generic.go:334] "Generic (PLEG): container finished" podID="7e3de38a-9461-4b9e-a5b1-826e9966a3b2" containerID="b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987" exitCode=143 Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.445224 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3de38a-9461-4b9e-a5b1-826e9966a3b2","Type":"ContainerDied","Data":"78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7"} Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.445297 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3de38a-9461-4b9e-a5b1-826e9966a3b2","Type":"ContainerDied","Data":"b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987"} Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.445351 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3de38a-9461-4b9e-a5b1-826e9966a3b2","Type":"ContainerDied","Data":"a48ad126f9e6a2823d5f98b73dcdfff03c7d54066492142ae3521c3014f6043c"} Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.445416 4805 scope.go:117] "RemoveContainer" containerID="78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.445581 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.459567 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-config-data" (OuterVolumeSpecName: "config-data") pod "7e3de38a-9461-4b9e-a5b1-826e9966a3b2" (UID: "7e3de38a-9461-4b9e-a5b1-826e9966a3b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.467129 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e3de38a-9461-4b9e-a5b1-826e9966a3b2" (UID: "7e3de38a-9461-4b9e-a5b1-826e9966a3b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.467603 4805 generic.go:334] "Generic (PLEG): container finished" podID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerID="5f4ec14408d48854407fd281874a396047ec25b1dd447f36439bf732b5e5ef4e" exitCode=0 Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.467646 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125b1879-8e3b-4b4e-9035-0905f9d073c5","Type":"ContainerDied","Data":"5f4ec14408d48854407fd281874a396047ec25b1dd447f36439bf732b5e5ef4e"} Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.468380 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125b1879-8e3b-4b4e-9035-0905f9d073c5","Type":"ContainerDied","Data":"6febb5428578cc2651774b63ab4f812a590c97a323d8249f73e82898f5fab627"} Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.468204 4805 generic.go:334] "Generic (PLEG): container finished" podID="125b1879-8e3b-4b4e-9035-0905f9d073c5" 
containerID="6febb5428578cc2651774b63ab4f812a590c97a323d8249f73e82898f5fab627" exitCode=2 Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.468645 4805 generic.go:334] "Generic (PLEG): container finished" podID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerID="c76b530550a34d560f0482f9cc359f6a17b9dff76b8e39aea02d7b7c4dd591b6" exitCode=0 Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.468675 4805 generic.go:334] "Generic (PLEG): container finished" podID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerID="25f7c2fcfdabfc1ce0316e1e95e4e6390d14afb0eac1b6f16b9d80af7a0d15dd" exitCode=0 Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.470428 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125b1879-8e3b-4b4e-9035-0905f9d073c5","Type":"ContainerDied","Data":"c76b530550a34d560f0482f9cc359f6a17b9dff76b8e39aea02d7b7c4dd591b6"} Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.470462 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125b1879-8e3b-4b4e-9035-0905f9d073c5","Type":"ContainerDied","Data":"25f7c2fcfdabfc1ce0316e1e95e4e6390d14afb0eac1b6f16b9d80af7a0d15dd"} Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.524633 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.524963 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdqdv\" (UniqueName: \"kubernetes.io/projected/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-kube-api-access-qdqdv\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.524973 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.524982 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3de38a-9461-4b9e-a5b1-826e9966a3b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.592809 4805 scope.go:117] "RemoveContainer" containerID="b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.627667 4805 scope.go:117] "RemoveContainer" containerID="78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7" Feb 16 21:18:28 crc kubenswrapper[4805]: E0216 21:18:28.628496 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7\": container with ID starting with 78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7 not found: ID does not exist" containerID="78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.628530 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7"} err="failed to get container status \"78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7\": rpc error: code = NotFound desc = could not find container \"78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7\": container with ID starting with 78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7 not found: ID does not exist" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.628550 4805 scope.go:117] "RemoveContainer" containerID="b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987" Feb 16 21:18:28 crc kubenswrapper[4805]: E0216 21:18:28.630589 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987\": container with ID starting with b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987 not found: ID does not exist" containerID="b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.630629 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987"} err="failed to get container status \"b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987\": rpc error: code = NotFound desc = could not find container \"b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987\": container with ID starting with b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987 not found: ID does not exist" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.630652 4805 scope.go:117] "RemoveContainer" containerID="78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.631563 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7"} err="failed to get container status \"78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7\": rpc error: code = NotFound desc = could not find container \"78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7\": container with ID starting with 78c81165aa92dfc275825e01baef305ce494a5e01c494230f1c3dc817982cce7 not found: ID does not exist" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.631599 4805 scope.go:117] "RemoveContainer" containerID="b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.631858 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987"} err="failed to get container status \"b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987\": rpc error: code = NotFound desc = could not find container \"b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987\": container with ID starting with b9706de5596526fb1feca1b235e4a07a5fe0f22e135f80ef4ea55dbb7f75e987 not found: ID does not exist" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.668388 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.730015 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-log-httpd\") pod \"125b1879-8e3b-4b4e-9035-0905f9d073c5\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.730186 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-config-data\") pod \"125b1879-8e3b-4b4e-9035-0905f9d073c5\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.730261 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4kmz\" (UniqueName: \"kubernetes.io/projected/125b1879-8e3b-4b4e-9035-0905f9d073c5-kube-api-access-n4kmz\") pod \"125b1879-8e3b-4b4e-9035-0905f9d073c5\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.730282 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-combined-ca-bundle\") pod 
\"125b1879-8e3b-4b4e-9035-0905f9d073c5\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.730300 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-run-httpd\") pod \"125b1879-8e3b-4b4e-9035-0905f9d073c5\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.730313 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-scripts\") pod \"125b1879-8e3b-4b4e-9035-0905f9d073c5\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.730363 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-sg-core-conf-yaml\") pod \"125b1879-8e3b-4b4e-9035-0905f9d073c5\" (UID: \"125b1879-8e3b-4b4e-9035-0905f9d073c5\") " Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.731734 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "125b1879-8e3b-4b4e-9035-0905f9d073c5" (UID: "125b1879-8e3b-4b4e-9035-0905f9d073c5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.731804 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "125b1879-8e3b-4b4e-9035-0905f9d073c5" (UID: "125b1879-8e3b-4b4e-9035-0905f9d073c5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.738844 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-scripts" (OuterVolumeSpecName: "scripts") pod "125b1879-8e3b-4b4e-9035-0905f9d073c5" (UID: "125b1879-8e3b-4b4e-9035-0905f9d073c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.748063 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125b1879-8e3b-4b4e-9035-0905f9d073c5-kube-api-access-n4kmz" (OuterVolumeSpecName: "kube-api-access-n4kmz") pod "125b1879-8e3b-4b4e-9035-0905f9d073c5" (UID: "125b1879-8e3b-4b4e-9035-0905f9d073c5"). InnerVolumeSpecName "kube-api-access-n4kmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.792236 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "125b1879-8e3b-4b4e-9035-0905f9d073c5" (UID: "125b1879-8e3b-4b4e-9035-0905f9d073c5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.798322 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.818159 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.835875 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4kmz\" (UniqueName: \"kubernetes.io/projected/125b1879-8e3b-4b4e-9035-0905f9d073c5-kube-api-access-n4kmz\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.838750 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.841801 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.842096 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.842184 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125b1879-8e3b-4b4e-9035-0905f9d073c5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.842294 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:28 crc kubenswrapper[4805]: E0216 21:18:28.842787 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3de38a-9461-4b9e-a5b1-826e9966a3b2" 
containerName="nova-metadata-log" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.842847 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3de38a-9461-4b9e-a5b1-826e9966a3b2" containerName="nova-metadata-log" Feb 16 21:18:28 crc kubenswrapper[4805]: E0216 21:18:28.842911 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3de38a-9461-4b9e-a5b1-826e9966a3b2" containerName="nova-metadata-metadata" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.842957 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3de38a-9461-4b9e-a5b1-826e9966a3b2" containerName="nova-metadata-metadata" Feb 16 21:18:28 crc kubenswrapper[4805]: E0216 21:18:28.843013 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="ceilometer-central-agent" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.843067 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="ceilometer-central-agent" Feb 16 21:18:28 crc kubenswrapper[4805]: E0216 21:18:28.843129 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="sg-core" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.843175 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="sg-core" Feb 16 21:18:28 crc kubenswrapper[4805]: E0216 21:18:28.843231 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="ceilometer-notification-agent" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.843277 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="ceilometer-notification-agent" Feb 16 21:18:28 crc kubenswrapper[4805]: E0216 21:18:28.843326 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="proxy-httpd" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.843376 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="proxy-httpd" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.844431 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="proxy-httpd" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.844627 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="ceilometer-notification-agent" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.845220 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3de38a-9461-4b9e-a5b1-826e9966a3b2" containerName="nova-metadata-log" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.845295 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="sg-core" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.845355 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3de38a-9461-4b9e-a5b1-826e9966a3b2" containerName="nova-metadata-metadata" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.845410 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" containerName="ceilometer-central-agent" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.846567 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.849912 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.849205 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "125b1879-8e3b-4b4e-9035-0905f9d073c5" (UID: "125b1879-8e3b-4b4e-9035-0905f9d073c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.853365 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.854249 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.885648 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-config-data" (OuterVolumeSpecName: "config-data") pod "125b1879-8e3b-4b4e-9035-0905f9d073c5" (UID: "125b1879-8e3b-4b4e-9035-0905f9d073c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.943663 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-config-data\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.943863 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-logs\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.943936 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.944019 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.944048 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddrvh\" (UniqueName: \"kubernetes.io/projected/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-kube-api-access-ddrvh\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:28 crc 
kubenswrapper[4805]: I0216 21:18:28.944445 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:28 crc kubenswrapper[4805]: I0216 21:18:28.944480 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125b1879-8e3b-4b4e-9035-0905f9d073c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.046025 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.046126 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.046156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddrvh\" (UniqueName: \"kubernetes.io/projected/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-kube-api-access-ddrvh\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.046273 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-config-data\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 
21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.046334 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-logs\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.046764 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-logs\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.050083 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-config-data\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.050242 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.051709 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.069245 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddrvh\" (UniqueName: 
\"kubernetes.io/projected/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-kube-api-access-ddrvh\") pod \"nova-metadata-0\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " pod="openstack/nova-metadata-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.176267 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.482867 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125b1879-8e3b-4b4e-9035-0905f9d073c5","Type":"ContainerDied","Data":"ea490ecbaf68e6653a1dabf2ed3995f6e0913bcbb72cae2d56a40b619201c472"} Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.483139 4805 scope.go:117] "RemoveContainer" containerID="5f4ec14408d48854407fd281874a396047ec25b1dd447f36439bf732b5e5ef4e" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.482905 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.509453 4805 scope.go:117] "RemoveContainer" containerID="6febb5428578cc2651774b63ab4f812a590c97a323d8249f73e82898f5fab627" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.534127 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.539255 4805 scope.go:117] "RemoveContainer" containerID="c76b530550a34d560f0482f9cc359f6a17b9dff76b8e39aea02d7b7c4dd591b6" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.544897 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.564047 4805 scope.go:117] "RemoveContainer" containerID="25f7c2fcfdabfc1ce0316e1e95e4e6390d14afb0eac1b6f16b9d80af7a0d15dd" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.567836 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 
21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.570628 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.572917 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.578599 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.586809 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.640424 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="125b1879-8e3b-4b4e-9035-0905f9d073c5" path="/var/lib/kubelet/pods/125b1879-8e3b-4b4e-9035-0905f9d073c5/volumes" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.642855 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3de38a-9461-4b9e-a5b1-826e9966a3b2" path="/var/lib/kubelet/pods/7e3de38a-9461-4b9e-a5b1-826e9966a3b2/volumes" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.659179 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.659318 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mrbw\" (UniqueName: \"kubernetes.io/projected/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-kube-api-access-4mrbw\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.659400 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-config-data\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.659473 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-scripts\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.659957 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.660269 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-log-httpd\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.660309 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-run-httpd\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.747464 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.763057 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.763204 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-log-httpd\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.763242 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-run-httpd\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.763300 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.763322 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mrbw\" (UniqueName: \"kubernetes.io/projected/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-kube-api-access-4mrbw\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.763352 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-config-data\") pod \"ceilometer-0\" (UID: 
\"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.763379 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-scripts\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.764338 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-log-httpd\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.765939 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-run-httpd\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.771773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.773475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-scripts\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.774508 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.775494 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-config-data\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.798217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mrbw\" (UniqueName: \"kubernetes.io/projected/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-kube-api-access-4mrbw\") pod \"ceilometer-0\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " pod="openstack/ceilometer-0" Feb 16 21:18:29 crc kubenswrapper[4805]: I0216 21:18:29.892004 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:30 crc kubenswrapper[4805]: I0216 21:18:30.353177 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:30 crc kubenswrapper[4805]: W0216 21:18:30.364316 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf702fe5f_7446_4d4e_bfa2_f8273a3cf2f6.slice/crio-db5a9f186a89018071f58c86318d3c25f552f37bc000aa0dd1818b3449c0a820 WatchSource:0}: Error finding container db5a9f186a89018071f58c86318d3c25f552f37bc000aa0dd1818b3449c0a820: Status 404 returned error can't find the container with id db5a9f186a89018071f58c86318d3c25f552f37bc000aa0dd1818b3449c0a820 Feb 16 21:18:30 crc kubenswrapper[4805]: I0216 21:18:30.500391 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da","Type":"ContainerStarted","Data":"299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649"} Feb 16 21:18:30 crc kubenswrapper[4805]: I0216 21:18:30.500646 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da","Type":"ContainerStarted","Data":"65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949"} Feb 16 21:18:30 crc kubenswrapper[4805]: I0216 21:18:30.500656 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da","Type":"ContainerStarted","Data":"61f68c201435e8acbcfd8af27357d35631627ca055e85c136a26c009d84cf901"} Feb 16 21:18:30 crc kubenswrapper[4805]: I0216 21:18:30.502584 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6","Type":"ContainerStarted","Data":"db5a9f186a89018071f58c86318d3c25f552f37bc000aa0dd1818b3449c0a820"} Feb 16 21:18:30 crc kubenswrapper[4805]: I0216 21:18:30.504738 4805 generic.go:334] "Generic (PLEG): container finished" podID="521423fc-6efd-4f61-89f3-f1523eb8e9f5" containerID="6490e84da6532409dec05cfaae4b31e66b16ad62ad6caa714a2ffd4f6ea6c2d3" exitCode=0 Feb 16 21:18:30 crc kubenswrapper[4805]: I0216 21:18:30.504794 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kw7z8" event={"ID":"521423fc-6efd-4f61-89f3-f1523eb8e9f5","Type":"ContainerDied","Data":"6490e84da6532409dec05cfaae4b31e66b16ad62ad6caa714a2ffd4f6ea6c2d3"} Feb 16 21:18:30 crc kubenswrapper[4805]: I0216 21:18:30.525588 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.525567172 podStartE2EDuration="2.525567172s" podCreationTimestamp="2026-02-16 21:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:18:30.520340441 +0000 UTC m=+1328.339023756" watchObservedRunningTime="2026-02-16 21:18:30.525567172 +0000 UTC m=+1328.344250477" Feb 16 21:18:30 crc kubenswrapper[4805]: I0216 21:18:30.618136 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:18:30 crc kubenswrapper[4805]: I0216 21:18:30.619338 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:18:30 crc kubenswrapper[4805]: I0216 21:18:30.946631 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.034880 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.035283 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.092186 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.103932 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.196968 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-67lql"] Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.197246 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" podUID="dbf9f4f7-172d-4321-8294-dc697a17b360" containerName="dnsmasq-dns" containerID="cri-o://029f5def6e94a21aebf1a5a0a5eaa6411c824e627835aaac91fbec55d38f7ab4" gracePeriod=10 Feb 16 21:18:31 crc kubenswrapper[4805]: E0216 21:18:31.477881 4805 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbf9f4f7_172d_4321_8294_dc697a17b360.slice/crio-029f5def6e94a21aebf1a5a0a5eaa6411c824e627835aaac91fbec55d38f7ab4.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.518378 4805 generic.go:334] "Generic (PLEG): container finished" podID="205f4efe-0a2d-4d28-a929-c89b671cefae" containerID="70c3496eaa3cdd95cb25af45b1d8d0d1dc143e9edb600988a1d0ede7d7095ac8" exitCode=0 Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.518487 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gxz95" event={"ID":"205f4efe-0a2d-4d28-a929-c89b671cefae","Type":"ContainerDied","Data":"70c3496eaa3cdd95cb25af45b1d8d0d1dc143e9edb600988a1d0ede7d7095ac8"} Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.524809 4805 generic.go:334] "Generic (PLEG): container finished" podID="dbf9f4f7-172d-4321-8294-dc697a17b360" containerID="029f5def6e94a21aebf1a5a0a5eaa6411c824e627835aaac91fbec55d38f7ab4" exitCode=0 Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.524874 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" event={"ID":"dbf9f4f7-172d-4321-8294-dc697a17b360","Type":"ContainerDied","Data":"029f5def6e94a21aebf1a5a0a5eaa6411c824e627835aaac91fbec55d38f7ab4"} Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.533286 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6","Type":"ContainerStarted","Data":"a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7"} Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.591595 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 21:18:31 crc kubenswrapper[4805]: 
I0216 21:18:31.707088 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c970e977-c22d-46d1-9062-37981c5302dc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.235:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.709990 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c970e977-c22d-46d1-9062-37981c5302dc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.235:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.890304 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.923204 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbcqv\" (UniqueName: \"kubernetes.io/projected/dbf9f4f7-172d-4321-8294-dc697a17b360-kube-api-access-wbcqv\") pod \"dbf9f4f7-172d-4321-8294-dc697a17b360\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.923261 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-sb\") pod \"dbf9f4f7-172d-4321-8294-dc697a17b360\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.923375 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-swift-storage-0\") pod \"dbf9f4f7-172d-4321-8294-dc697a17b360\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " Feb 16 21:18:31 crc kubenswrapper[4805]: 
I0216 21:18:31.923398 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-config\") pod \"dbf9f4f7-172d-4321-8294-dc697a17b360\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.923489 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-nb\") pod \"dbf9f4f7-172d-4321-8294-dc697a17b360\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.923570 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-svc\") pod \"dbf9f4f7-172d-4321-8294-dc697a17b360\" (UID: \"dbf9f4f7-172d-4321-8294-dc697a17b360\") " Feb 16 21:18:31 crc kubenswrapper[4805]: I0216 21:18:31.932173 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf9f4f7-172d-4321-8294-dc697a17b360-kube-api-access-wbcqv" (OuterVolumeSpecName: "kube-api-access-wbcqv") pod "dbf9f4f7-172d-4321-8294-dc697a17b360" (UID: "dbf9f4f7-172d-4321-8294-dc697a17b360"). InnerVolumeSpecName "kube-api-access-wbcqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.026911 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbcqv\" (UniqueName: \"kubernetes.io/projected/dbf9f4f7-172d-4321-8294-dc697a17b360-kube-api-access-wbcqv\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.050288 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.105315 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-config" (OuterVolumeSpecName: "config") pod "dbf9f4f7-172d-4321-8294-dc697a17b360" (UID: "dbf9f4f7-172d-4321-8294-dc697a17b360"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.113118 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dbf9f4f7-172d-4321-8294-dc697a17b360" (UID: "dbf9f4f7-172d-4321-8294-dc697a17b360"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.128983 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-combined-ca-bundle\") pod \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.129045 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-scripts\") pod \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.129097 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-config-data\") pod \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " Feb 16 21:18:32 crc 
kubenswrapper[4805]: I0216 21:18:32.129123 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwtpg\" (UniqueName: \"kubernetes.io/projected/521423fc-6efd-4f61-89f3-f1523eb8e9f5-kube-api-access-rwtpg\") pod \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\" (UID: \"521423fc-6efd-4f61-89f3-f1523eb8e9f5\") " Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.129627 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.129645 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.133099 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-scripts" (OuterVolumeSpecName: "scripts") pod "521423fc-6efd-4f61-89f3-f1523eb8e9f5" (UID: "521423fc-6efd-4f61-89f3-f1523eb8e9f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.135218 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521423fc-6efd-4f61-89f3-f1523eb8e9f5-kube-api-access-rwtpg" (OuterVolumeSpecName: "kube-api-access-rwtpg") pod "521423fc-6efd-4f61-89f3-f1523eb8e9f5" (UID: "521423fc-6efd-4f61-89f3-f1523eb8e9f5"). InnerVolumeSpecName "kube-api-access-rwtpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.143371 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dbf9f4f7-172d-4321-8294-dc697a17b360" (UID: "dbf9f4f7-172d-4321-8294-dc697a17b360"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.155428 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dbf9f4f7-172d-4321-8294-dc697a17b360" (UID: "dbf9f4f7-172d-4321-8294-dc697a17b360"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.159021 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dbf9f4f7-172d-4321-8294-dc697a17b360" (UID: "dbf9f4f7-172d-4321-8294-dc697a17b360"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.188967 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "521423fc-6efd-4f61-89f3-f1523eb8e9f5" (UID: "521423fc-6efd-4f61-89f3-f1523eb8e9f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.210845 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-config-data" (OuterVolumeSpecName: "config-data") pod "521423fc-6efd-4f61-89f3-f1523eb8e9f5" (UID: "521423fc-6efd-4f61-89f3-f1523eb8e9f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.232056 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.232084 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.232095 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.232103 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521423fc-6efd-4f61-89f3-f1523eb8e9f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.232111 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwtpg\" (UniqueName: \"kubernetes.io/projected/521423fc-6efd-4f61-89f3-f1523eb8e9f5-kube-api-access-rwtpg\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.232121 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.232128 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbf9f4f7-172d-4321-8294-dc697a17b360-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.548871 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6","Type":"ContainerStarted","Data":"7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc"} Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.554878 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kw7z8" event={"ID":"521423fc-6efd-4f61-89f3-f1523eb8e9f5","Type":"ContainerDied","Data":"7c1655b18cf1cf85febf03e59077384d14a40fda2d6ca5a11ca369f14ebd316f"} Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.554915 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c1655b18cf1cf85febf03e59077384d14a40fda2d6ca5a11ca369f14ebd316f" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.554972 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kw7z8" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.562262 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" event={"ID":"dbf9f4f7-172d-4321-8294-dc697a17b360","Type":"ContainerDied","Data":"8edac093ae3bfb06ebf73a7fd96b174bfb9f1b9b54a646ecca0763f926c8b644"} Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.562339 4805 scope.go:117] "RemoveContainer" containerID="029f5def6e94a21aebf1a5a0a5eaa6411c824e627835aaac91fbec55d38f7ab4" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.562574 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-67lql" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.620777 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-67lql"] Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.629841 4805 scope.go:117] "RemoveContainer" containerID="241d1074a5b0c4eadb147c9832904887a3f5e3d2385fa9e1ff68bd5adce179a1" Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.636083 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-67lql"] Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.739911 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.740149 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c970e977-c22d-46d1-9062-37981c5302dc" containerName="nova-api-log" containerID="cri-o://4052690266f057d6cdf2956d14aa0bf7a4e339bae7030ee61237210f5d00874b" gracePeriod=30 Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.740630 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c970e977-c22d-46d1-9062-37981c5302dc" containerName="nova-api-api" 
containerID="cri-o://6278f140e04d5ac7be41f220b3a50ae372905e7f56a8b3e2bfa0d1c5614d57e7" gracePeriod=30 Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.767072 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.791123 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.791491 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" containerName="nova-metadata-log" containerID="cri-o://65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949" gracePeriod=30 Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.791827 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" containerName="nova-metadata-metadata" containerID="cri-o://299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649" gracePeriod=30 Feb 16 21:18:32 crc kubenswrapper[4805]: I0216 21:18:32.955038 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.068818 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-scripts\") pod \"205f4efe-0a2d-4d28-a929-c89b671cefae\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.068914 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-combined-ca-bundle\") pod \"205f4efe-0a2d-4d28-a929-c89b671cefae\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.069021 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bxtj\" (UniqueName: \"kubernetes.io/projected/205f4efe-0a2d-4d28-a929-c89b671cefae-kube-api-access-8bxtj\") pod \"205f4efe-0a2d-4d28-a929-c89b671cefae\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.069072 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-config-data\") pod \"205f4efe-0a2d-4d28-a929-c89b671cefae\" (UID: \"205f4efe-0a2d-4d28-a929-c89b671cefae\") " Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.076903 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-scripts" (OuterVolumeSpecName: "scripts") pod "205f4efe-0a2d-4d28-a929-c89b671cefae" (UID: "205f4efe-0a2d-4d28-a929-c89b671cefae"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.078877 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205f4efe-0a2d-4d28-a929-c89b671cefae-kube-api-access-8bxtj" (OuterVolumeSpecName: "kube-api-access-8bxtj") pod "205f4efe-0a2d-4d28-a929-c89b671cefae" (UID: "205f4efe-0a2d-4d28-a929-c89b671cefae"). InnerVolumeSpecName "kube-api-access-8bxtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.100691 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-config-data" (OuterVolumeSpecName: "config-data") pod "205f4efe-0a2d-4d28-a929-c89b671cefae" (UID: "205f4efe-0a2d-4d28-a929-c89b671cefae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.120451 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "205f4efe-0a2d-4d28-a929-c89b671cefae" (UID: "205f4efe-0a2d-4d28-a929-c89b671cefae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.172036 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bxtj\" (UniqueName: \"kubernetes.io/projected/205f4efe-0a2d-4d28-a929-c89b671cefae-kube-api-access-8bxtj\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.172062 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.172071 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.172081 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205f4efe-0a2d-4d28-a929-c89b671cefae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.457946 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.579230 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-config-data\") pod \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.579313 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-nova-metadata-tls-certs\") pod \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.579371 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-combined-ca-bundle\") pod \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.579397 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddrvh\" (UniqueName: \"kubernetes.io/projected/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-kube-api-access-ddrvh\") pod \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.579488 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-logs\") pod \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\" (UID: \"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da\") " Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.579520 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-gxz95" event={"ID":"205f4efe-0a2d-4d28-a929-c89b671cefae","Type":"ContainerDied","Data":"a686473fe1126b7ee3a0607e3ac4c2cb909389c29e27f4d2bcf130e88c846b71"} Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.579555 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a686473fe1126b7ee3a0607e3ac4c2cb909389c29e27f4d2bcf130e88c846b71" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.579605 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.580684 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-logs" (OuterVolumeSpecName: "logs") pod "6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" (UID: "6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.595870 4805 generic.go:334] "Generic (PLEG): container finished" podID="6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" containerID="299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649" exitCode=0 Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.595901 4805 generic.go:334] "Generic (PLEG): container finished" podID="6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" containerID="65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949" exitCode=143 Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.595938 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da","Type":"ContainerDied","Data":"299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649"} Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.595963 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da","Type":"ContainerDied","Data":"65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949"} Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.595973 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da","Type":"ContainerDied","Data":"61f68c201435e8acbcfd8af27357d35631627ca055e85c136a26c009d84cf901"} Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.595988 4805 scope.go:117] "RemoveContainer" containerID="299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.596092 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.604119 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-kube-api-access-ddrvh" (OuterVolumeSpecName: "kube-api-access-ddrvh") pod "6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" (UID: "6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da"). InnerVolumeSpecName "kube-api-access-ddrvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.626909 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf9f4f7-172d-4321-8294-dc697a17b360" path="/var/lib/kubelet/pods/dbf9f4f7-172d-4321-8294-dc697a17b360/volumes" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.646424 4805 generic.go:334] "Generic (PLEG): container finished" podID="c970e977-c22d-46d1-9062-37981c5302dc" containerID="4052690266f057d6cdf2956d14aa0bf7a4e339bae7030ee61237210f5d00874b" exitCode=143 Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.647143 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-config-data" (OuterVolumeSpecName: "config-data") pod "6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" (UID: "6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.648304 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ae7106d0-bb33-4111-8ea3-9e9e149b5cb0" containerName="nova-scheduler-scheduler" containerID="cri-o://a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24" gracePeriod=30 Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.682622 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.682884 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddrvh\" (UniqueName: \"kubernetes.io/projected/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-kube-api-access-ddrvh\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.682947 4805 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.683840 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" (UID: "6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.746197 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" (UID: "6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.770446 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6","Type":"ContainerStarted","Data":"4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1"} Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.770818 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 21:18:33 crc kubenswrapper[4805]: E0216 21:18:33.772311 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" containerName="nova-metadata-metadata" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.772339 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" containerName="nova-metadata-metadata" Feb 16 21:18:33 crc kubenswrapper[4805]: E0216 21:18:33.772391 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521423fc-6efd-4f61-89f3-f1523eb8e9f5" containerName="nova-manage" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.772401 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="521423fc-6efd-4f61-89f3-f1523eb8e9f5" containerName="nova-manage" Feb 16 21:18:33 crc kubenswrapper[4805]: E0216 21:18:33.772427 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf9f4f7-172d-4321-8294-dc697a17b360" containerName="dnsmasq-dns" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.772436 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf9f4f7-172d-4321-8294-dc697a17b360" containerName="dnsmasq-dns" Feb 16 21:18:33 crc kubenswrapper[4805]: E0216 21:18:33.772462 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf9f4f7-172d-4321-8294-dc697a17b360" containerName="init" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.772492 4805 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="dbf9f4f7-172d-4321-8294-dc697a17b360" containerName="init" Feb 16 21:18:33 crc kubenswrapper[4805]: E0216 21:18:33.772511 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205f4efe-0a2d-4d28-a929-c89b671cefae" containerName="nova-cell1-conductor-db-sync" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.772523 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="205f4efe-0a2d-4d28-a929-c89b671cefae" containerName="nova-cell1-conductor-db-sync" Feb 16 21:18:33 crc kubenswrapper[4805]: E0216 21:18:33.772548 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" containerName="nova-metadata-log" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.772556 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" containerName="nova-metadata-log" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.773201 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" containerName="nova-metadata-log" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.773238 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" containerName="nova-metadata-metadata" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.773297 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf9f4f7-172d-4321-8294-dc697a17b360" containerName="dnsmasq-dns" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.773319 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="205f4efe-0a2d-4d28-a929-c89b671cefae" containerName="nova-cell1-conductor-db-sync" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.773340 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="521423fc-6efd-4f61-89f3-f1523eb8e9f5" containerName="nova-manage" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.774454 4805 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.774486 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-mtghv"] Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.776978 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-mtghv"] Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.777032 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c970e977-c22d-46d1-9062-37981c5302dc","Type":"ContainerDied","Data":"4052690266f057d6cdf2956d14aa0bf7a4e339bae7030ee61237210f5d00874b"} Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.777091 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-d63c-account-create-update-qrtgj"] Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.779384 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d63c-account-create-update-qrtgj" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.780174 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.780594 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-mtghv" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.782341 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.782631 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.793737 4805 scope.go:117] "RemoveContainer" containerID="65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.795693 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.795740 4805 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.798809 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-d63c-account-create-update-qrtgj"] Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.841105 4805 scope.go:117] "RemoveContainer" containerID="299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649" Feb 16 21:18:33 crc kubenswrapper[4805]: E0216 21:18:33.841589 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649\": container with ID starting with 299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649 not found: ID does not exist" containerID="299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 
21:18:33.841623 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649"} err="failed to get container status \"299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649\": rpc error: code = NotFound desc = could not find container \"299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649\": container with ID starting with 299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649 not found: ID does not exist" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.841645 4805 scope.go:117] "RemoveContainer" containerID="65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949" Feb 16 21:18:33 crc kubenswrapper[4805]: E0216 21:18:33.842044 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949\": container with ID starting with 65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949 not found: ID does not exist" containerID="65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.842073 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949"} err="failed to get container status \"65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949\": rpc error: code = NotFound desc = could not find container \"65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949\": container with ID starting with 65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949 not found: ID does not exist" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.842087 4805 scope.go:117] "RemoveContainer" containerID="299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649" Feb 16 21:18:33 crc 
kubenswrapper[4805]: I0216 21:18:33.842328 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649"} err="failed to get container status \"299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649\": rpc error: code = NotFound desc = could not find container \"299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649\": container with ID starting with 299e1ca1c052659f18744ac33ccc55229f3924413bfcaa571b23704b231cb649 not found: ID does not exist" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.842353 4805 scope.go:117] "RemoveContainer" containerID="65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.842531 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949"} err="failed to get container status \"65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949\": rpc error: code = NotFound desc = could not find container \"65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949\": container with ID starting with 65f829dc1b320e4b4541401467a78360032fcca3a8f86c629b159228930f8949 not found: ID does not exist" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.898250 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973a9e10-9520-4eea-90d8-2e52e480d949-operator-scripts\") pod \"aodh-db-create-mtghv\" (UID: \"973a9e10-9520-4eea-90d8-2e52e480d949\") " pod="openstack/aodh-db-create-mtghv" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.898301 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f3312fbc-9e01-40d8-b648-89d1c8747aad-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f3312fbc-9e01-40d8-b648-89d1c8747aad\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.898331 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-operator-scripts\") pod \"aodh-d63c-account-create-update-qrtgj\" (UID: \"d8f5353d-4097-41a1-83fe-7f7747ed9fb7\") " pod="openstack/aodh-d63c-account-create-update-qrtgj" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.898579 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpc8b\" (UniqueName: \"kubernetes.io/projected/f3312fbc-9e01-40d8-b648-89d1c8747aad-kube-api-access-cpc8b\") pod \"nova-cell1-conductor-0\" (UID: \"f3312fbc-9e01-40d8-b648-89d1c8747aad\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.898831 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp8bn\" (UniqueName: \"kubernetes.io/projected/973a9e10-9520-4eea-90d8-2e52e480d949-kube-api-access-wp8bn\") pod \"aodh-db-create-mtghv\" (UID: \"973a9e10-9520-4eea-90d8-2e52e480d949\") " pod="openstack/aodh-db-create-mtghv" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.898863 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5qj6\" (UniqueName: \"kubernetes.io/projected/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-kube-api-access-t5qj6\") pod \"aodh-d63c-account-create-update-qrtgj\" (UID: \"d8f5353d-4097-41a1-83fe-7f7747ed9fb7\") " pod="openstack/aodh-d63c-account-create-update-qrtgj" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.898979 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3312fbc-9e01-40d8-b648-89d1c8747aad-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f3312fbc-9e01-40d8-b648-89d1c8747aad\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.971354 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:33 crc kubenswrapper[4805]: I0216 21:18:33.984648 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.000159 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.002187 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973a9e10-9520-4eea-90d8-2e52e480d949-operator-scripts\") pod \"aodh-db-create-mtghv\" (UID: \"973a9e10-9520-4eea-90d8-2e52e480d949\") " pod="openstack/aodh-db-create-mtghv" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.002244 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3312fbc-9e01-40d8-b648-89d1c8747aad-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f3312fbc-9e01-40d8-b648-89d1c8747aad\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.002267 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-operator-scripts\") pod \"aodh-d63c-account-create-update-qrtgj\" (UID: \"d8f5353d-4097-41a1-83fe-7f7747ed9fb7\") " pod="openstack/aodh-d63c-account-create-update-qrtgj" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.002364 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpc8b\" (UniqueName: \"kubernetes.io/projected/f3312fbc-9e01-40d8-b648-89d1c8747aad-kube-api-access-cpc8b\") pod \"nova-cell1-conductor-0\" (UID: \"f3312fbc-9e01-40d8-b648-89d1c8747aad\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.002455 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp8bn\" (UniqueName: \"kubernetes.io/projected/973a9e10-9520-4eea-90d8-2e52e480d949-kube-api-access-wp8bn\") pod \"aodh-db-create-mtghv\" (UID: \"973a9e10-9520-4eea-90d8-2e52e480d949\") " pod="openstack/aodh-db-create-mtghv" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.002488 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5qj6\" (UniqueName: \"kubernetes.io/projected/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-kube-api-access-t5qj6\") pod \"aodh-d63c-account-create-update-qrtgj\" (UID: \"d8f5353d-4097-41a1-83fe-7f7747ed9fb7\") " pod="openstack/aodh-d63c-account-create-update-qrtgj" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.002704 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3312fbc-9e01-40d8-b648-89d1c8747aad-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f3312fbc-9e01-40d8-b648-89d1c8747aad\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.003290 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.006017 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-operator-scripts\") pod \"aodh-d63c-account-create-update-qrtgj\" (UID: \"d8f5353d-4097-41a1-83fe-7f7747ed9fb7\") " pod="openstack/aodh-d63c-account-create-update-qrtgj" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.006355 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.008606 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973a9e10-9520-4eea-90d8-2e52e480d949-operator-scripts\") pod \"aodh-db-create-mtghv\" (UID: \"973a9e10-9520-4eea-90d8-2e52e480d949\") " pod="openstack/aodh-db-create-mtghv" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.009381 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.018086 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3312fbc-9e01-40d8-b648-89d1c8747aad-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f3312fbc-9e01-40d8-b648-89d1c8747aad\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.020567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3312fbc-9e01-40d8-b648-89d1c8747aad-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f3312fbc-9e01-40d8-b648-89d1c8747aad\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.020749 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp8bn\" (UniqueName: \"kubernetes.io/projected/973a9e10-9520-4eea-90d8-2e52e480d949-kube-api-access-wp8bn\") pod \"aodh-db-create-mtghv\" (UID: \"973a9e10-9520-4eea-90d8-2e52e480d949\") " pod="openstack/aodh-db-create-mtghv" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.025830 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpc8b\" (UniqueName: \"kubernetes.io/projected/f3312fbc-9e01-40d8-b648-89d1c8747aad-kube-api-access-cpc8b\") pod \"nova-cell1-conductor-0\" (UID: \"f3312fbc-9e01-40d8-b648-89d1c8747aad\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.030154 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.036706 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5qj6\" (UniqueName: \"kubernetes.io/projected/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-kube-api-access-t5qj6\") pod \"aodh-d63c-account-create-update-qrtgj\" (UID: \"d8f5353d-4097-41a1-83fe-7f7747ed9fb7\") " pod="openstack/aodh-d63c-account-create-update-qrtgj" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.104598 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxlqm\" (UniqueName: \"kubernetes.io/projected/1640d816-9924-451c-b2fd-21abd0975ef8-kube-api-access-mxlqm\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.105108 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1640d816-9924-451c-b2fd-21abd0975ef8-logs\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " 
pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.105264 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.105467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.105563 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-config-data\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.113092 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d63c-account-create-update-qrtgj" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.138224 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.141326 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-mtghv" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.207220 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.207263 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-config-data\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.207364 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxlqm\" (UniqueName: \"kubernetes.io/projected/1640d816-9924-451c-b2fd-21abd0975ef8-kube-api-access-mxlqm\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.207405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1640d816-9924-451c-b2fd-21abd0975ef8-logs\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.207425 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.208091 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1640d816-9924-451c-b2fd-21abd0975ef8-logs\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.211112 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.214767 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.215124 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-config-data\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.224821 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxlqm\" (UniqueName: \"kubernetes.io/projected/1640d816-9924-451c-b2fd-21abd0975ef8-kube-api-access-mxlqm\") pod \"nova-metadata-0\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.338218 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.869600 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-d63c-account-create-update-qrtgj"] Feb 16 21:18:34 crc kubenswrapper[4805]: W0216 21:18:34.891186 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3312fbc_9e01_40d8_b648_89d1c8747aad.slice/crio-8511146f92ccf790b98f6fdecbd03d50e466c65ee6c78d1cefd7369a70d42c55 WatchSource:0}: Error finding container 8511146f92ccf790b98f6fdecbd03d50e466c65ee6c78d1cefd7369a70d42c55: Status 404 returned error can't find the container with id 8511146f92ccf790b98f6fdecbd03d50e466c65ee6c78d1cefd7369a70d42c55 Feb 16 21:18:34 crc kubenswrapper[4805]: I0216 21:18:34.891962 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 21:18:35 crc kubenswrapper[4805]: W0216 21:18:35.125590 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1640d816_9924_451c_b2fd_21abd0975ef8.slice/crio-dff16f4207f2da99ff749d5970e05751cd27589be1551c35a4d7ec11d69e27af WatchSource:0}: Error finding container dff16f4207f2da99ff749d5970e05751cd27589be1551c35a4d7ec11d69e27af: Status 404 returned error can't find the container with id dff16f4207f2da99ff749d5970e05751cd27589be1551c35a4d7ec11d69e27af Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.130788 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.138042 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-mtghv"] Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.607512 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.613595 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da" path="/var/lib/kubelet/pods/6aefb5ce-8e7b-4ab7-b68a-2397c4fc27da/volumes" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.717182 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1640d816-9924-451c-b2fd-21abd0975ef8","Type":"ContainerStarted","Data":"5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2"} Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.717221 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1640d816-9924-451c-b2fd-21abd0975ef8","Type":"ContainerStarted","Data":"dff16f4207f2da99ff749d5970e05751cd27589be1551c35a4d7ec11d69e27af"} Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.720661 4805 generic.go:334] "Generic (PLEG): container finished" podID="ae7106d0-bb33-4111-8ea3-9e9e149b5cb0" containerID="a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24" exitCode=0 Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.720736 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.720780 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0","Type":"ContainerDied","Data":"a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24"} Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.720807 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0","Type":"ContainerDied","Data":"695bd2bacb48c5cd55b4fd9e806c09a6705895601537272d5c554d92d3de95d2"} Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.720823 4805 scope.go:117] "RemoveContainer" containerID="a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.726132 4805 generic.go:334] "Generic (PLEG): container finished" podID="d8f5353d-4097-41a1-83fe-7f7747ed9fb7" containerID="0d413b93ba5891014f531951463cb7133080be22fbbfbd50448026e5ca7535ba" exitCode=0 Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.726202 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d63c-account-create-update-qrtgj" event={"ID":"d8f5353d-4097-41a1-83fe-7f7747ed9fb7","Type":"ContainerDied","Data":"0d413b93ba5891014f531951463cb7133080be22fbbfbd50448026e5ca7535ba"} Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.726238 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d63c-account-create-update-qrtgj" event={"ID":"d8f5353d-4097-41a1-83fe-7f7747ed9fb7","Type":"ContainerStarted","Data":"09e06572d293067b7c90e5ca27ca7ad311d4ad6198edd26798b1b50ce30d3117"} Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.731525 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"f3312fbc-9e01-40d8-b648-89d1c8747aad","Type":"ContainerStarted","Data":"1d59247282805000fc73a397f1f2c1506bb3b5cf46e15fbac5f2a87266453ac2"} Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.731578 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f3312fbc-9e01-40d8-b648-89d1c8747aad","Type":"ContainerStarted","Data":"8511146f92ccf790b98f6fdecbd03d50e466c65ee6c78d1cefd7369a70d42c55"} Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.732515 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.734891 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6","Type":"ContainerStarted","Data":"c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179"} Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.735102 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.740032 4805 generic.go:334] "Generic (PLEG): container finished" podID="973a9e10-9520-4eea-90d8-2e52e480d949" containerID="d975fe4b41131b7630b710a3d9128f93ab327a1ac6bd19d0e467d51e731d6c79" exitCode=0 Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.740064 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-mtghv" event={"ID":"973a9e10-9520-4eea-90d8-2e52e480d949","Type":"ContainerDied","Data":"d975fe4b41131b7630b710a3d9128f93ab327a1ac6bd19d0e467d51e731d6c79"} Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.740080 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-mtghv" event={"ID":"973a9e10-9520-4eea-90d8-2e52e480d949","Type":"ContainerStarted","Data":"b9bad83fb35f479abce782a8ad21aedb4f3534e6c9969a1bf17c19ef2e2e5fe5"} Feb 16 21:18:35 crc kubenswrapper[4805]: 
I0216 21:18:35.769493 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4pfq\" (UniqueName: \"kubernetes.io/projected/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-kube-api-access-s4pfq\") pod \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.769867 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-combined-ca-bundle\") pod \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.769893 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-config-data\") pod \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\" (UID: \"ae7106d0-bb33-4111-8ea3-9e9e149b5cb0\") " Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.775917 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-kube-api-access-s4pfq" (OuterVolumeSpecName: "kube-api-access-s4pfq") pod "ae7106d0-bb33-4111-8ea3-9e9e149b5cb0" (UID: "ae7106d0-bb33-4111-8ea3-9e9e149b5cb0"). InnerVolumeSpecName "kube-api-access-s4pfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.790642 4805 scope.go:117] "RemoveContainer" containerID="a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24" Feb 16 21:18:35 crc kubenswrapper[4805]: E0216 21:18:35.796898 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24\": container with ID starting with a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24 not found: ID does not exist" containerID="a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.796938 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24"} err="failed to get container status \"a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24\": rpc error: code = NotFound desc = could not find container \"a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24\": container with ID starting with a4800b918d3d64082ce5c525209499c8a9dacdf4d7de2c8372506d4e7453bf24 not found: ID does not exist" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.829566 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.8295356099999998 podStartE2EDuration="2.82953561s" podCreationTimestamp="2026-02-16 21:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:18:35.790854422 +0000 UTC m=+1333.609537717" watchObservedRunningTime="2026-02-16 21:18:35.82953561 +0000 UTC m=+1333.648218905" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.856865 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-config-data" (OuterVolumeSpecName: "config-data") pod "ae7106d0-bb33-4111-8ea3-9e9e149b5cb0" (UID: "ae7106d0-bb33-4111-8ea3-9e9e149b5cb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.857498 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae7106d0-bb33-4111-8ea3-9e9e149b5cb0" (UID: "ae7106d0-bb33-4111-8ea3-9e9e149b5cb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.861735 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.913941972 podStartE2EDuration="6.861658253s" podCreationTimestamp="2026-02-16 21:18:29 +0000 UTC" firstStartedPulling="2026-02-16 21:18:30.366957203 +0000 UTC m=+1328.185640498" lastFinishedPulling="2026-02-16 21:18:34.314673484 +0000 UTC m=+1332.133356779" observedRunningTime="2026-02-16 21:18:35.812361569 +0000 UTC m=+1333.631044864" watchObservedRunningTime="2026-02-16 21:18:35.861658253 +0000 UTC m=+1333.680341538" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.874288 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.874313 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:35 crc kubenswrapper[4805]: I0216 21:18:35.874324 4805 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-s4pfq\" (UniqueName: \"kubernetes.io/projected/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0-kube-api-access-s4pfq\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.054801 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.080168 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.115890 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:18:36 crc kubenswrapper[4805]: E0216 21:18:36.116486 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae7106d0-bb33-4111-8ea3-9e9e149b5cb0" containerName="nova-scheduler-scheduler" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.116509 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae7106d0-bb33-4111-8ea3-9e9e149b5cb0" containerName="nova-scheduler-scheduler" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.116778 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae7106d0-bb33-4111-8ea3-9e9e149b5cb0" containerName="nova-scheduler-scheduler" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.117637 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.120039 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.127574 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.284282 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5ffz\" (UniqueName: \"kubernetes.io/projected/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-kube-api-access-g5ffz\") pod \"nova-scheduler-0\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.284744 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.284975 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-config-data\") pod \"nova-scheduler-0\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.388063 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5ffz\" (UniqueName: \"kubernetes.io/projected/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-kube-api-access-g5ffz\") pod \"nova-scheduler-0\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.388575 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.389447 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-config-data\") pod \"nova-scheduler-0\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.396200 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.397068 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-config-data\") pod \"nova-scheduler-0\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.415515 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5ffz\" (UniqueName: \"kubernetes.io/projected/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-kube-api-access-g5ffz\") pod \"nova-scheduler-0\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " pod="openstack/nova-scheduler-0" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.432648 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.755000 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1640d816-9924-451c-b2fd-21abd0975ef8","Type":"ContainerStarted","Data":"fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c"} Feb 16 21:18:36 crc kubenswrapper[4805]: I0216 21:18:36.795980 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.795963161 podStartE2EDuration="3.795963161s" podCreationTimestamp="2026-02-16 21:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:18:36.780485825 +0000 UTC m=+1334.599169120" watchObservedRunningTime="2026-02-16 21:18:36.795963161 +0000 UTC m=+1334.614646456" Feb 16 21:18:37 crc kubenswrapper[4805]: W0216 21:18:37.012295 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92697e5a_1dd1_40ea_9b55_82b01bef5a3f.slice/crio-85fd1f28cd17d6a90460df1a7df8af7b8adcb7eab350bb329c24c7b2287ed959 WatchSource:0}: Error finding container 85fd1f28cd17d6a90460df1a7df8af7b8adcb7eab350bb329c24c7b2287ed959: Status 404 returned error can't find the container with id 85fd1f28cd17d6a90460df1a7df8af7b8adcb7eab350bb329c24c7b2287ed959 Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.022160 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.270518 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-d63c-account-create-update-qrtgj" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.434953 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5qj6\" (UniqueName: \"kubernetes.io/projected/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-kube-api-access-t5qj6\") pod \"d8f5353d-4097-41a1-83fe-7f7747ed9fb7\" (UID: \"d8f5353d-4097-41a1-83fe-7f7747ed9fb7\") " Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.435046 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-operator-scripts\") pod \"d8f5353d-4097-41a1-83fe-7f7747ed9fb7\" (UID: \"d8f5353d-4097-41a1-83fe-7f7747ed9fb7\") " Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.436193 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8f5353d-4097-41a1-83fe-7f7747ed9fb7" (UID: "d8f5353d-4097-41a1-83fe-7f7747ed9fb7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.450908 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-kube-api-access-t5qj6" (OuterVolumeSpecName: "kube-api-access-t5qj6") pod "d8f5353d-4097-41a1-83fe-7f7747ed9fb7" (UID: "d8f5353d-4097-41a1-83fe-7f7747ed9fb7"). InnerVolumeSpecName "kube-api-access-t5qj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.538745 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5qj6\" (UniqueName: \"kubernetes.io/projected/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-kube-api-access-t5qj6\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.538783 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8f5353d-4097-41a1-83fe-7f7747ed9fb7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.546661 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-mtghv" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.625909 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae7106d0-bb33-4111-8ea3-9e9e149b5cb0" path="/var/lib/kubelet/pods/ae7106d0-bb33-4111-8ea3-9e9e149b5cb0/volumes" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.741849 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973a9e10-9520-4eea-90d8-2e52e480d949-operator-scripts\") pod \"973a9e10-9520-4eea-90d8-2e52e480d949\" (UID: \"973a9e10-9520-4eea-90d8-2e52e480d949\") " Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.741995 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp8bn\" (UniqueName: \"kubernetes.io/projected/973a9e10-9520-4eea-90d8-2e52e480d949-kube-api-access-wp8bn\") pod \"973a9e10-9520-4eea-90d8-2e52e480d949\" (UID: \"973a9e10-9520-4eea-90d8-2e52e480d949\") " Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.744049 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973a9e10-9520-4eea-90d8-2e52e480d949-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "973a9e10-9520-4eea-90d8-2e52e480d949" (UID: "973a9e10-9520-4eea-90d8-2e52e480d949"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.751015 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973a9e10-9520-4eea-90d8-2e52e480d949-kube-api-access-wp8bn" (OuterVolumeSpecName: "kube-api-access-wp8bn") pod "973a9e10-9520-4eea-90d8-2e52e480d949" (UID: "973a9e10-9520-4eea-90d8-2e52e480d949"). InnerVolumeSpecName "kube-api-access-wp8bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.782747 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d63c-account-create-update-qrtgj" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.782769 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d63c-account-create-update-qrtgj" event={"ID":"d8f5353d-4097-41a1-83fe-7f7747ed9fb7","Type":"ContainerDied","Data":"09e06572d293067b7c90e5ca27ca7ad311d4ad6198edd26798b1b50ce30d3117"} Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.782870 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09e06572d293067b7c90e5ca27ca7ad311d4ad6198edd26798b1b50ce30d3117" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.786372 4805 generic.go:334] "Generic (PLEG): container finished" podID="c970e977-c22d-46d1-9062-37981c5302dc" containerID="6278f140e04d5ac7be41f220b3a50ae372905e7f56a8b3e2bfa0d1c5614d57e7" exitCode=0 Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.786459 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c970e977-c22d-46d1-9062-37981c5302dc","Type":"ContainerDied","Data":"6278f140e04d5ac7be41f220b3a50ae372905e7f56a8b3e2bfa0d1c5614d57e7"} Feb 16 21:18:37 crc 
kubenswrapper[4805]: I0216 21:18:37.786491 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c970e977-c22d-46d1-9062-37981c5302dc","Type":"ContainerDied","Data":"c22cf7a97e91b5cdccd5936fa7b188a6d3d7ae56d0e3b216d8792dcee3115517"} Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.786505 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c22cf7a97e91b5cdccd5936fa7b188a6d3d7ae56d0e3b216d8792dcee3115517" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.789000 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-mtghv" event={"ID":"973a9e10-9520-4eea-90d8-2e52e480d949","Type":"ContainerDied","Data":"b9bad83fb35f479abce782a8ad21aedb4f3534e6c9969a1bf17c19ef2e2e5fe5"} Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.789028 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9bad83fb35f479abce782a8ad21aedb4f3534e6c9969a1bf17c19ef2e2e5fe5" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.789069 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-mtghv" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.793497 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92697e5a-1dd1-40ea-9b55-82b01bef5a3f","Type":"ContainerStarted","Data":"5b533119dc1f8d0ff94edd0debad63b9c4fc792009f0cbb74cbc51e2f4f11beb"} Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.793530 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92697e5a-1dd1-40ea-9b55-82b01bef5a3f","Type":"ContainerStarted","Data":"85fd1f28cd17d6a90460df1a7df8af7b8adcb7eab350bb329c24c7b2287ed959"} Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.822814 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.831330 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.831311801 podStartE2EDuration="1.831311801s" podCreationTimestamp="2026-02-16 21:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:18:37.819097953 +0000 UTC m=+1335.637781268" watchObservedRunningTime="2026-02-16 21:18:37.831311801 +0000 UTC m=+1335.649995096" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.844973 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973a9e10-9520-4eea-90d8-2e52e480d949-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.845155 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp8bn\" (UniqueName: \"kubernetes.io/projected/973a9e10-9520-4eea-90d8-2e52e480d949-kube-api-access-wp8bn\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.946571 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-combined-ca-bundle\") pod \"c970e977-c22d-46d1-9062-37981c5302dc\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.946651 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvt5s\" (UniqueName: \"kubernetes.io/projected/c970e977-c22d-46d1-9062-37981c5302dc-kube-api-access-nvt5s\") pod \"c970e977-c22d-46d1-9062-37981c5302dc\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.946775 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-config-data\") pod \"c970e977-c22d-46d1-9062-37981c5302dc\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.946852 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c970e977-c22d-46d1-9062-37981c5302dc-logs\") pod \"c970e977-c22d-46d1-9062-37981c5302dc\" (UID: \"c970e977-c22d-46d1-9062-37981c5302dc\") " Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.948343 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c970e977-c22d-46d1-9062-37981c5302dc-logs" (OuterVolumeSpecName: "logs") pod "c970e977-c22d-46d1-9062-37981c5302dc" (UID: "c970e977-c22d-46d1-9062-37981c5302dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.955861 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c970e977-c22d-46d1-9062-37981c5302dc-kube-api-access-nvt5s" (OuterVolumeSpecName: "kube-api-access-nvt5s") pod "c970e977-c22d-46d1-9062-37981c5302dc" (UID: "c970e977-c22d-46d1-9062-37981c5302dc"). InnerVolumeSpecName "kube-api-access-nvt5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.982569 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-config-data" (OuterVolumeSpecName: "config-data") pod "c970e977-c22d-46d1-9062-37981c5302dc" (UID: "c970e977-c22d-46d1-9062-37981c5302dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:37 crc kubenswrapper[4805]: I0216 21:18:37.982911 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c970e977-c22d-46d1-9062-37981c5302dc" (UID: "c970e977-c22d-46d1-9062-37981c5302dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.059740 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.059772 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvt5s\" (UniqueName: \"kubernetes.io/projected/c970e977-c22d-46d1-9062-37981c5302dc-kube-api-access-nvt5s\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.059784 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c970e977-c22d-46d1-9062-37981c5302dc-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.059796 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c970e977-c22d-46d1-9062-37981c5302dc-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.803609 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.869808 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.911440 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.932500 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 21:18:38 crc kubenswrapper[4805]: E0216 21:18:38.933112 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c970e977-c22d-46d1-9062-37981c5302dc" containerName="nova-api-api" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.933133 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c970e977-c22d-46d1-9062-37981c5302dc" containerName="nova-api-api" Feb 16 21:18:38 crc kubenswrapper[4805]: E0216 21:18:38.933156 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f5353d-4097-41a1-83fe-7f7747ed9fb7" containerName="mariadb-account-create-update" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.933165 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f5353d-4097-41a1-83fe-7f7747ed9fb7" containerName="mariadb-account-create-update" Feb 16 21:18:38 crc kubenswrapper[4805]: E0216 21:18:38.933204 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c970e977-c22d-46d1-9062-37981c5302dc" containerName="nova-api-log" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.933213 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c970e977-c22d-46d1-9062-37981c5302dc" containerName="nova-api-log" Feb 16 21:18:38 crc kubenswrapper[4805]: E0216 21:18:38.933231 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973a9e10-9520-4eea-90d8-2e52e480d949" containerName="mariadb-database-create" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.933239 4805 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="973a9e10-9520-4eea-90d8-2e52e480d949" containerName="mariadb-database-create" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.933508 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c970e977-c22d-46d1-9062-37981c5302dc" containerName="nova-api-log" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.933545 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f5353d-4097-41a1-83fe-7f7747ed9fb7" containerName="mariadb-account-create-update" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.933561 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="973a9e10-9520-4eea-90d8-2e52e480d949" containerName="mariadb-database-create" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.933583 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c970e977-c22d-46d1-9062-37981c5302dc" containerName="nova-api-api" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.935028 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.937401 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 21:18:38 crc kubenswrapper[4805]: I0216 21:18:38.951283 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.097499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-config-data\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.097754 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10779c3-e063-42aa-9d58-0c5687bc0dfc-logs\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.097821 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lmzn\" (UniqueName: \"kubernetes.io/projected/a10779c3-e063-42aa-9d58-0c5687bc0dfc-kube-api-access-5lmzn\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.097946 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.103604 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-xnz5m"] Feb 16 
21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.106090 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.118556 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-xnz5m"] Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.146320 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.146399 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.146585 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-8mrcb" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.148347 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.200005 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-combined-ca-bundle\") pod \"aodh-db-sync-xnz5m\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.200147 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-config-data\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.200200 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-scripts\") pod \"aodh-db-sync-xnz5m\" 
(UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.200248 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-config-data\") pod \"aodh-db-sync-xnz5m\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.200280 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10779c3-e063-42aa-9d58-0c5687bc0dfc-logs\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.200314 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lmzn\" (UniqueName: \"kubernetes.io/projected/a10779c3-e063-42aa-9d58-0c5687bc0dfc-kube-api-access-5lmzn\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.200356 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwlp\" (UniqueName: \"kubernetes.io/projected/c8f843e3-43b6-405f-84be-dccbf9dbceac-kube-api-access-9rwlp\") pod \"aodh-db-sync-xnz5m\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.200429 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.200710 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10779c3-e063-42aa-9d58-0c5687bc0dfc-logs\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.206191 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.207430 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-config-data\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.218112 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lmzn\" (UniqueName: \"kubernetes.io/projected/a10779c3-e063-42aa-9d58-0c5687bc0dfc-kube-api-access-5lmzn\") pod \"nova-api-0\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.264323 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.302471 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-combined-ca-bundle\") pod \"aodh-db-sync-xnz5m\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.302650 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-scripts\") pod \"aodh-db-sync-xnz5m\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.302703 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-config-data\") pod \"aodh-db-sync-xnz5m\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.302784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rwlp\" (UniqueName: \"kubernetes.io/projected/c8f843e3-43b6-405f-84be-dccbf9dbceac-kube-api-access-9rwlp\") pod \"aodh-db-sync-xnz5m\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.309414 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-scripts\") pod \"aodh-db-sync-xnz5m\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.311662 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-combined-ca-bundle\") pod \"aodh-db-sync-xnz5m\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.312071 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-config-data\") pod \"aodh-db-sync-xnz5m\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.331766 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rwlp\" (UniqueName: \"kubernetes.io/projected/c8f843e3-43b6-405f-84be-dccbf9dbceac-kube-api-access-9rwlp\") pod \"aodh-db-sync-xnz5m\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.339212 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.339318 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.463018 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.622346 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c970e977-c22d-46d1-9062-37981c5302dc" path="/var/lib/kubelet/pods/c970e977-c22d-46d1-9062-37981c5302dc/volumes" Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.766375 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.814713 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10779c3-e063-42aa-9d58-0c5687bc0dfc","Type":"ContainerStarted","Data":"7b9dbab52e8703e81473cdf84583f56e063de3c59573503585a633af1c3c7922"} Feb 16 21:18:39 crc kubenswrapper[4805]: I0216 21:18:39.968904 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-xnz5m"] Feb 16 21:18:40 crc kubenswrapper[4805]: I0216 21:18:40.826956 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xnz5m" event={"ID":"c8f843e3-43b6-405f-84be-dccbf9dbceac","Type":"ContainerStarted","Data":"748db38cd029cbb1c7c2cb293825147c20fd5f8a5636510284d4519c6ffb5447"} Feb 16 21:18:40 crc kubenswrapper[4805]: I0216 21:18:40.831378 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10779c3-e063-42aa-9d58-0c5687bc0dfc","Type":"ContainerStarted","Data":"e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb"} Feb 16 21:18:40 crc kubenswrapper[4805]: I0216 21:18:40.831512 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10779c3-e063-42aa-9d58-0c5687bc0dfc","Type":"ContainerStarted","Data":"35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385"} Feb 16 21:18:40 crc kubenswrapper[4805]: I0216 21:18:40.864359 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=2.864340402 podStartE2EDuration="2.864340402s" podCreationTimestamp="2026-02-16 21:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:18:40.851073166 +0000 UTC m=+1338.669756501" watchObservedRunningTime="2026-02-16 21:18:40.864340402 +0000 UTC m=+1338.683023697" Feb 16 21:18:41 crc kubenswrapper[4805]: I0216 21:18:41.434335 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 21:18:44 crc kubenswrapper[4805]: I0216 21:18:44.204929 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 21:18:44 crc kubenswrapper[4805]: I0216 21:18:44.339993 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 21:18:44 crc kubenswrapper[4805]: I0216 21:18:44.340482 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 21:18:44 crc kubenswrapper[4805]: I0216 21:18:44.892539 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xnz5m" event={"ID":"c8f843e3-43b6-405f-84be-dccbf9dbceac","Type":"ContainerStarted","Data":"0a0f156a551a55e047b6b01dcf925d2bd2fcddb4120b207e38714ce69383c852"} Feb 16 21:18:44 crc kubenswrapper[4805]: I0216 21:18:44.909367 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-xnz5m" podStartSLOduration=1.930775645 podStartE2EDuration="5.909352836s" podCreationTimestamp="2026-02-16 21:18:39 +0000 UTC" firstStartedPulling="2026-02-16 21:18:39.985107473 +0000 UTC m=+1337.803790768" lastFinishedPulling="2026-02-16 21:18:43.963684654 +0000 UTC m=+1341.782367959" observedRunningTime="2026-02-16 21:18:44.906966393 +0000 UTC m=+1342.725649678" watchObservedRunningTime="2026-02-16 21:18:44.909352836 +0000 UTC m=+1342.728036131" Feb 16 
21:18:45 crc kubenswrapper[4805]: I0216 21:18:45.350822 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:18:45 crc kubenswrapper[4805]: I0216 21:18:45.350829 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:18:46 crc kubenswrapper[4805]: I0216 21:18:46.433643 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 21:18:46 crc kubenswrapper[4805]: I0216 21:18:46.493337 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 21:18:46 crc kubenswrapper[4805]: I0216 21:18:46.932329 4805 generic.go:334] "Generic (PLEG): container finished" podID="c8f843e3-43b6-405f-84be-dccbf9dbceac" containerID="0a0f156a551a55e047b6b01dcf925d2bd2fcddb4120b207e38714ce69383c852" exitCode=0 Feb 16 21:18:46 crc kubenswrapper[4805]: I0216 21:18:46.932404 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xnz5m" event={"ID":"c8f843e3-43b6-405f-84be-dccbf9dbceac","Type":"ContainerDied","Data":"0a0f156a551a55e047b6b01dcf925d2bd2fcddb4120b207e38714ce69383c852"} Feb 16 21:18:46 crc kubenswrapper[4805]: I0216 21:18:46.975346 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.391712 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.535526 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rwlp\" (UniqueName: \"kubernetes.io/projected/c8f843e3-43b6-405f-84be-dccbf9dbceac-kube-api-access-9rwlp\") pod \"c8f843e3-43b6-405f-84be-dccbf9dbceac\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.535852 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-scripts\") pod \"c8f843e3-43b6-405f-84be-dccbf9dbceac\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.535934 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-combined-ca-bundle\") pod \"c8f843e3-43b6-405f-84be-dccbf9dbceac\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.535991 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-config-data\") pod \"c8f843e3-43b6-405f-84be-dccbf9dbceac\" (UID: \"c8f843e3-43b6-405f-84be-dccbf9dbceac\") " Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.541487 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f843e3-43b6-405f-84be-dccbf9dbceac-kube-api-access-9rwlp" (OuterVolumeSpecName: "kube-api-access-9rwlp") pod "c8f843e3-43b6-405f-84be-dccbf9dbceac" (UID: "c8f843e3-43b6-405f-84be-dccbf9dbceac"). InnerVolumeSpecName "kube-api-access-9rwlp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.541515 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-scripts" (OuterVolumeSpecName: "scripts") pod "c8f843e3-43b6-405f-84be-dccbf9dbceac" (UID: "c8f843e3-43b6-405f-84be-dccbf9dbceac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.572626 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8f843e3-43b6-405f-84be-dccbf9dbceac" (UID: "c8f843e3-43b6-405f-84be-dccbf9dbceac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.581861 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-config-data" (OuterVolumeSpecName: "config-data") pod "c8f843e3-43b6-405f-84be-dccbf9dbceac" (UID: "c8f843e3-43b6-405f-84be-dccbf9dbceac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.642007 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rwlp\" (UniqueName: \"kubernetes.io/projected/c8f843e3-43b6-405f-84be-dccbf9dbceac-kube-api-access-9rwlp\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.642936 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.642948 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.642956 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f843e3-43b6-405f-84be-dccbf9dbceac-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.968495 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xnz5m" event={"ID":"c8f843e3-43b6-405f-84be-dccbf9dbceac","Type":"ContainerDied","Data":"748db38cd029cbb1c7c2cb293825147c20fd5f8a5636510284d4519c6ffb5447"} Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.968903 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="748db38cd029cbb1c7c2cb293825147c20fd5f8a5636510284d4519c6ffb5447" Feb 16 21:18:48 crc kubenswrapper[4805]: I0216 21:18:48.969108 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-xnz5m" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.164758 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 16 21:18:49 crc kubenswrapper[4805]: E0216 21:18:49.168204 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f843e3-43b6-405f-84be-dccbf9dbceac" containerName="aodh-db-sync" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.168241 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f843e3-43b6-405f-84be-dccbf9dbceac" containerName="aodh-db-sync" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.168443 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f843e3-43b6-405f-84be-dccbf9dbceac" containerName="aodh-db-sync" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.217148 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.217345 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.220641 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-8mrcb" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.221267 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.221431 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.265480 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.265946 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.363791 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-combined-ca-bundle\") pod \"aodh-0\" (UID: \"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.364348 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-config-data\") pod \"aodh-0\" (UID: \"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.365112 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-scripts\") pod \"aodh-0\" (UID: \"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 
21:18:49.365289 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvw7\" (UniqueName: \"kubernetes.io/projected/52add033-f900-449f-a793-bed363692402-kube-api-access-rrvw7\") pod \"aodh-0\" (UID: \"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.467271 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-combined-ca-bundle\") pod \"aodh-0\" (UID: \"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.467370 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-config-data\") pod \"aodh-0\" (UID: \"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.467392 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-scripts\") pod \"aodh-0\" (UID: \"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.467453 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvw7\" (UniqueName: \"kubernetes.io/projected/52add033-f900-449f-a793-bed363692402-kube-api-access-rrvw7\") pod \"aodh-0\" (UID: \"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.475061 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-config-data\") pod \"aodh-0\" (UID: 
\"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.476274 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-combined-ca-bundle\") pod \"aodh-0\" (UID: \"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.479430 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-scripts\") pod \"aodh-0\" (UID: \"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.488343 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvw7\" (UniqueName: \"kubernetes.io/projected/52add033-f900-449f-a793-bed363692402-kube-api-access-rrvw7\") pod \"aodh-0\" (UID: \"52add033-f900-449f-a793-bed363692402\") " pod="openstack/aodh-0" Feb 16 21:18:49 crc kubenswrapper[4805]: I0216 21:18:49.534620 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 21:18:50 crc kubenswrapper[4805]: I0216 21:18:50.028557 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 21:18:50 crc kubenswrapper[4805]: I0216 21:18:50.347892 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.248:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:18:50 crc kubenswrapper[4805]: I0216 21:18:50.347892 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.248:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:18:50 crc kubenswrapper[4805]: I0216 21:18:50.989092 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"52add033-f900-449f-a793-bed363692402","Type":"ContainerStarted","Data":"2ae86fc06ab1f362cb0d30be8da2695e2a4a8deaf2fabe2fb8a9f5c5c0c4b260"} Feb 16 21:18:51 crc kubenswrapper[4805]: I0216 21:18:51.863066 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:51 crc kubenswrapper[4805]: I0216 21:18:51.864266 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="ceilometer-central-agent" containerID="cri-o://a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7" gracePeriod=30 Feb 16 21:18:51 crc kubenswrapper[4805]: I0216 21:18:51.864302 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="proxy-httpd" 
containerID="cri-o://c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179" gracePeriod=30 Feb 16 21:18:51 crc kubenswrapper[4805]: I0216 21:18:51.864336 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="ceilometer-notification-agent" containerID="cri-o://7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc" gracePeriod=30 Feb 16 21:18:51 crc kubenswrapper[4805]: I0216 21:18:51.864392 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="sg-core" containerID="cri-o://4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1" gracePeriod=30 Feb 16 21:18:51 crc kubenswrapper[4805]: I0216 21:18:51.871238 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.242:3000/\": EOF" Feb 16 21:18:52 crc kubenswrapper[4805]: I0216 21:18:52.011732 4805 generic.go:334] "Generic (PLEG): container finished" podID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerID="c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179" exitCode=0 Feb 16 21:18:52 crc kubenswrapper[4805]: I0216 21:18:52.011749 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6","Type":"ContainerDied","Data":"c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179"} Feb 16 21:18:52 crc kubenswrapper[4805]: I0216 21:18:52.011802 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6","Type":"ContainerDied","Data":"4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1"} Feb 16 21:18:52 crc kubenswrapper[4805]: I0216 
21:18:52.011769 4805 generic.go:334] "Generic (PLEG): container finished" podID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerID="4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1" exitCode=2 Feb 16 21:18:52 crc kubenswrapper[4805]: I0216 21:18:52.013929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"52add033-f900-449f-a793-bed363692402","Type":"ContainerStarted","Data":"7a78482f088aaddd4a08ddbac4f4f23383170fd402187473bc5e68d9509d11c2"} Feb 16 21:18:52 crc kubenswrapper[4805]: I0216 21:18:52.325292 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 16 21:18:53 crc kubenswrapper[4805]: I0216 21:18:53.038154 4805 generic.go:334] "Generic (PLEG): container finished" podID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerID="a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7" exitCode=0 Feb 16 21:18:53 crc kubenswrapper[4805]: I0216 21:18:53.038232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6","Type":"ContainerDied","Data":"a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7"} Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.059878 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"52add033-f900-449f-a793-bed363692402","Type":"ContainerStarted","Data":"6fd31cd7da6dd5521470bb4f12f11f2ab2eee3f5575319475b37cb11c463df04"} Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.350216 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.364586 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.365061 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 
16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.786899 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.826584 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-config-data\") pod \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.826747 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-sg-core-conf-yaml\") pod \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.826796 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-run-httpd\") pod \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.826828 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-combined-ca-bundle\") pod \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.826897 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mrbw\" (UniqueName: \"kubernetes.io/projected/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-kube-api-access-4mrbw\") pod \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 
21:18:54.826996 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-scripts\") pod \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.827034 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-log-httpd\") pod \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\" (UID: \"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6\") " Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.828103 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" (UID: "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.828438 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" (UID: "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.833089 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-kube-api-access-4mrbw" (OuterVolumeSpecName: "kube-api-access-4mrbw") pod "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" (UID: "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6"). InnerVolumeSpecName "kube-api-access-4mrbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.833502 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-scripts" (OuterVolumeSpecName: "scripts") pod "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" (UID: "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.863136 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" (UID: "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.929460 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mrbw\" (UniqueName: \"kubernetes.io/projected/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-kube-api-access-4mrbw\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.929489 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.929498 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.929507 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:54 crc 
kubenswrapper[4805]: I0216 21:18:54.929514 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.939552 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" (UID: "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:54 crc kubenswrapper[4805]: I0216 21:18:54.970986 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-config-data" (OuterVolumeSpecName: "config-data") pod "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" (UID: "f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.031394 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.031431 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.077197 4805 generic.go:334] "Generic (PLEG): container finished" podID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerID="7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc" exitCode=0 Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.077251 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6","Type":"ContainerDied","Data":"7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc"} Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.077277 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6","Type":"ContainerDied","Data":"db5a9f186a89018071f58c86318d3c25f552f37bc000aa0dd1818b3449c0a820"} Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.077294 4805 scope.go:117] "RemoveContainer" containerID="c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.077429 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.099617 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"52add033-f900-449f-a793-bed363692402","Type":"ContainerStarted","Data":"0347c5e11743da60f38dd839acda381bd17e18f1dc5d354f7c10302bfee8ceed"} Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.106981 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.242866 4805 scope.go:117] "RemoveContainer" containerID="4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.259773 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.276008 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.284273 4805 scope.go:117] "RemoveContainer" containerID="7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.289004 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:55 crc kubenswrapper[4805]: E0216 21:18:55.289438 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="proxy-httpd" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.289453 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="proxy-httpd" Feb 16 21:18:55 crc kubenswrapper[4805]: E0216 21:18:55.289470 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="sg-core" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.289477 4805 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="sg-core" Feb 16 21:18:55 crc kubenswrapper[4805]: E0216 21:18:55.289517 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="ceilometer-notification-agent" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.289523 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="ceilometer-notification-agent" Feb 16 21:18:55 crc kubenswrapper[4805]: E0216 21:18:55.289540 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="ceilometer-central-agent" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.289547 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="ceilometer-central-agent" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.289743 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="ceilometer-notification-agent" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.289775 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="sg-core" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.289785 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="ceilometer-central-agent" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.289799 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" containerName="proxy-httpd" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.291801 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.300099 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.329734 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.330989 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.355446 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-scripts\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.355527 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.355580 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-config-data\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.355670 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " 
pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.355694 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-run-httpd\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.355809 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-log-httpd\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.355829 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j75n\" (UniqueName: \"kubernetes.io/projected/930c2405-0fd0-4bb2-921e-cbbc031e1c67-kube-api-access-7j75n\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.401124 4805 scope.go:117] "RemoveContainer" containerID="a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.423608 4805 scope.go:117] "RemoveContainer" containerID="c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179" Feb 16 21:18:55 crc kubenswrapper[4805]: E0216 21:18:55.424124 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179\": container with ID starting with c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179 not found: ID does not exist" containerID="c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179" Feb 16 21:18:55 crc 
kubenswrapper[4805]: I0216 21:18:55.424168 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179"} err="failed to get container status \"c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179\": rpc error: code = NotFound desc = could not find container \"c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179\": container with ID starting with c288f24668d43104966ca555e2f9038aea36049d327d279e2628c0cd5018b179 not found: ID does not exist" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.424201 4805 scope.go:117] "RemoveContainer" containerID="4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1" Feb 16 21:18:55 crc kubenswrapper[4805]: E0216 21:18:55.424613 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1\": container with ID starting with 4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1 not found: ID does not exist" containerID="4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.424642 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1"} err="failed to get container status \"4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1\": rpc error: code = NotFound desc = could not find container \"4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1\": container with ID starting with 4989c01034cd7a34b1e53c7c1a92e9b2d270334b3f5fe81d92668536771b76a1 not found: ID does not exist" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.424656 4805 scope.go:117] "RemoveContainer" containerID="7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc" Feb 16 
21:18:55 crc kubenswrapper[4805]: E0216 21:18:55.424887 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc\": container with ID starting with 7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc not found: ID does not exist" containerID="7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.424908 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc"} err="failed to get container status \"7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc\": rpc error: code = NotFound desc = could not find container \"7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc\": container with ID starting with 7ea36171586f1e8760e5ae6fb60ec520f0bc6856883b28740318dc4c295ddbfc not found: ID does not exist" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.424921 4805 scope.go:117] "RemoveContainer" containerID="a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7" Feb 16 21:18:55 crc kubenswrapper[4805]: E0216 21:18:55.425152 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7\": container with ID starting with a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7 not found: ID does not exist" containerID="a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.425176 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7"} err="failed to get container status 
\"a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7\": rpc error: code = NotFound desc = could not find container \"a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7\": container with ID starting with a06a1830f47f7953db233a3f08d0174e73292a7480b117552d85daefc98366e7 not found: ID does not exist" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.458131 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.458169 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-run-httpd\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.458266 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-log-httpd\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.458283 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j75n\" (UniqueName: \"kubernetes.io/projected/930c2405-0fd0-4bb2-921e-cbbc031e1c67-kube-api-access-7j75n\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.458301 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-scripts\") pod 
\"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.458342 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.458380 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-config-data\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.458712 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-run-httpd\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.458951 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-log-httpd\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.462406 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.463531 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.464234 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-scripts\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.464462 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-config-data\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.477516 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j75n\" (UniqueName: \"kubernetes.io/projected/930c2405-0fd0-4bb2-921e-cbbc031e1c67-kube-api-access-7j75n\") pod \"ceilometer-0\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " pod="openstack/ceilometer-0" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.613028 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6" path="/var/lib/kubelet/pods/f702fe5f-7446-4d4e-bfa2-f8273a3cf2f6/volumes" Feb 16 21:18:55 crc kubenswrapper[4805]: I0216 21:18:55.658170 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:18:56 crc kubenswrapper[4805]: I0216 21:18:56.162047 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:18:56 crc kubenswrapper[4805]: W0216 21:18:56.174206 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod930c2405_0fd0_4bb2_921e_cbbc031e1c67.slice/crio-d8e08992169d0ec8810541d0cd4000fc9a943769b0baf4456e2a354d1edcba8b WatchSource:0}: Error finding container d8e08992169d0ec8810541d0cd4000fc9a943769b0baf4456e2a354d1edcba8b: Status 404 returned error can't find the container with id d8e08992169d0ec8810541d0cd4000fc9a943769b0baf4456e2a354d1edcba8b Feb 16 21:18:57 crc kubenswrapper[4805]: I0216 21:18:57.127424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"52add033-f900-449f-a793-bed363692402","Type":"ContainerStarted","Data":"c499819ce413d7af9d1e217f801dfb6867d4dff0a8f61937a1fa7b8981a46c4e"} Feb 16 21:18:57 crc kubenswrapper[4805]: I0216 21:18:57.129222 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-api" containerID="cri-o://7a78482f088aaddd4a08ddbac4f4f23383170fd402187473bc5e68d9509d11c2" gracePeriod=30 Feb 16 21:18:57 crc kubenswrapper[4805]: I0216 21:18:57.129953 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-listener" containerID="cri-o://c499819ce413d7af9d1e217f801dfb6867d4dff0a8f61937a1fa7b8981a46c4e" gracePeriod=30 Feb 16 21:18:57 crc kubenswrapper[4805]: I0216 21:18:57.129981 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-notifier" 
containerID="cri-o://0347c5e11743da60f38dd839acda381bd17e18f1dc5d354f7c10302bfee8ceed" gracePeriod=30 Feb 16 21:18:57 crc kubenswrapper[4805]: I0216 21:18:57.130001 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-evaluator" containerID="cri-o://6fd31cd7da6dd5521470bb4f12f11f2ab2eee3f5575319475b37cb11c463df04" gracePeriod=30 Feb 16 21:18:57 crc kubenswrapper[4805]: I0216 21:18:57.136372 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930c2405-0fd0-4bb2-921e-cbbc031e1c67","Type":"ContainerStarted","Data":"5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00"} Feb 16 21:18:57 crc kubenswrapper[4805]: I0216 21:18:57.136416 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930c2405-0fd0-4bb2-921e-cbbc031e1c67","Type":"ContainerStarted","Data":"d8e08992169d0ec8810541d0cd4000fc9a943769b0baf4456e2a354d1edcba8b"} Feb 16 21:18:57 crc kubenswrapper[4805]: I0216 21:18:57.164701 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.974642459 podStartE2EDuration="8.16468515s" podCreationTimestamp="2026-02-16 21:18:49 +0000 UTC" firstStartedPulling="2026-02-16 21:18:50.034539835 +0000 UTC m=+1347.853223130" lastFinishedPulling="2026-02-16 21:18:56.224582526 +0000 UTC m=+1354.043265821" observedRunningTime="2026-02-16 21:18:57.162054409 +0000 UTC m=+1354.980737704" watchObservedRunningTime="2026-02-16 21:18:57.16468515 +0000 UTC m=+1354.983368445" Feb 16 21:18:57 crc kubenswrapper[4805]: I0216 21:18:57.975000 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.024176 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-config-data\") pod \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.024233 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-combined-ca-bundle\") pod \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.024365 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mblz8\" (UniqueName: \"kubernetes.io/projected/dd638c31-2ddf-4958-8149-7f3ebcb9b844-kube-api-access-mblz8\") pod \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\" (UID: \"dd638c31-2ddf-4958-8149-7f3ebcb9b844\") " Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.034364 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd638c31-2ddf-4958-8149-7f3ebcb9b844-kube-api-access-mblz8" (OuterVolumeSpecName: "kube-api-access-mblz8") pod "dd638c31-2ddf-4958-8149-7f3ebcb9b844" (UID: "dd638c31-2ddf-4958-8149-7f3ebcb9b844"). InnerVolumeSpecName "kube-api-access-mblz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.061387 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-config-data" (OuterVolumeSpecName: "config-data") pod "dd638c31-2ddf-4958-8149-7f3ebcb9b844" (UID: "dd638c31-2ddf-4958-8149-7f3ebcb9b844"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.070969 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd638c31-2ddf-4958-8149-7f3ebcb9b844" (UID: "dd638c31-2ddf-4958-8149-7f3ebcb9b844"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.127084 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mblz8\" (UniqueName: \"kubernetes.io/projected/dd638c31-2ddf-4958-8149-7f3ebcb9b844-kube-api-access-mblz8\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.127133 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.127146 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd638c31-2ddf-4958-8149-7f3ebcb9b844-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.148997 4805 generic.go:334] "Generic (PLEG): container finished" podID="dd638c31-2ddf-4958-8149-7f3ebcb9b844" containerID="2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f" exitCode=137 Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.149220 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dd638c31-2ddf-4958-8149-7f3ebcb9b844","Type":"ContainerDied","Data":"2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f"} Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.149296 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dd638c31-2ddf-4958-8149-7f3ebcb9b844","Type":"ContainerDied","Data":"2b8500ff0fefd0bc95cce6f866de8b24bb9c098c3a97bb5fbe1a94979eaef23c"} Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.149453 4805 scope.go:117] "RemoveContainer" containerID="2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.149787 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.158204 4805 generic.go:334] "Generic (PLEG): container finished" podID="52add033-f900-449f-a793-bed363692402" containerID="0347c5e11743da60f38dd839acda381bd17e18f1dc5d354f7c10302bfee8ceed" exitCode=0 Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.158251 4805 generic.go:334] "Generic (PLEG): container finished" podID="52add033-f900-449f-a793-bed363692402" containerID="6fd31cd7da6dd5521470bb4f12f11f2ab2eee3f5575319475b37cb11c463df04" exitCode=0 Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.158261 4805 generic.go:334] "Generic (PLEG): container finished" podID="52add033-f900-449f-a793-bed363692402" containerID="7a78482f088aaddd4a08ddbac4f4f23383170fd402187473bc5e68d9509d11c2" exitCode=0 Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.158343 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"52add033-f900-449f-a793-bed363692402","Type":"ContainerDied","Data":"0347c5e11743da60f38dd839acda381bd17e18f1dc5d354f7c10302bfee8ceed"} Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.158398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"52add033-f900-449f-a793-bed363692402","Type":"ContainerDied","Data":"6fd31cd7da6dd5521470bb4f12f11f2ab2eee3f5575319475b37cb11c463df04"} Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.158411 4805 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"52add033-f900-449f-a793-bed363692402","Type":"ContainerDied","Data":"7a78482f088aaddd4a08ddbac4f4f23383170fd402187473bc5e68d9509d11c2"} Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.165056 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930c2405-0fd0-4bb2-921e-cbbc031e1c67","Type":"ContainerStarted","Data":"12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b"} Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.207005 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.207843 4805 scope.go:117] "RemoveContainer" containerID="2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f" Feb 16 21:18:58 crc kubenswrapper[4805]: E0216 21:18:58.214283 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f\": container with ID starting with 2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f not found: ID does not exist" containerID="2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.214348 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f"} err="failed to get container status \"2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f\": rpc error: code = NotFound desc = could not find container \"2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f\": container with ID starting with 2d577b00b60ad8fdc8b3413587d9ee48c051aaae60ca9194e941e09d5864fd4f not found: ID does not exist" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.219672 4805 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.232791 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:18:58 crc kubenswrapper[4805]: E0216 21:18:58.233568 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd638c31-2ddf-4958-8149-7f3ebcb9b844" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.233585 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd638c31-2ddf-4958-8149-7f3ebcb9b844" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.233855 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd638c31-2ddf-4958-8149-7f3ebcb9b844" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.234902 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.237741 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.237905 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.238026 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.240668 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.331559 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.331991 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.332097 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.332192 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28pgh\" (UniqueName: \"kubernetes.io/projected/1958022f-e55d-473a-8a90-1c3238569c9c-kube-api-access-28pgh\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.332335 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.434986 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.435288 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.435508 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28pgh\" (UniqueName: \"kubernetes.io/projected/1958022f-e55d-473a-8a90-1c3238569c9c-kube-api-access-28pgh\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.435781 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.436567 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.441025 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.441684 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.441695 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.442427 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1958022f-e55d-473a-8a90-1c3238569c9c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.464454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28pgh\" (UniqueName: \"kubernetes.io/projected/1958022f-e55d-473a-8a90-1c3238569c9c-kube-api-access-28pgh\") pod \"nova-cell1-novncproxy-0\" (UID: \"1958022f-e55d-473a-8a90-1c3238569c9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:58 crc kubenswrapper[4805]: I0216 21:18:58.568228 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:18:59 crc kubenswrapper[4805]: I0216 21:18:59.099076 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:18:59 crc kubenswrapper[4805]: W0216 21:18:59.105048 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1958022f_e55d_473a_8a90_1c3238569c9c.slice/crio-04ad0ad71c52ddca854c67546f8e72f1352764e02117725ce7eb57ca2b5db95e WatchSource:0}: Error finding container 04ad0ad71c52ddca854c67546f8e72f1352764e02117725ce7eb57ca2b5db95e: Status 404 returned error can't find the container with id 04ad0ad71c52ddca854c67546f8e72f1352764e02117725ce7eb57ca2b5db95e Feb 16 21:18:59 crc kubenswrapper[4805]: I0216 21:18:59.200418 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930c2405-0fd0-4bb2-921e-cbbc031e1c67","Type":"ContainerStarted","Data":"0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3"} Feb 16 21:18:59 crc kubenswrapper[4805]: I0216 21:18:59.203829 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1958022f-e55d-473a-8a90-1c3238569c9c","Type":"ContainerStarted","Data":"04ad0ad71c52ddca854c67546f8e72f1352764e02117725ce7eb57ca2b5db95e"} Feb 16 21:18:59 crc kubenswrapper[4805]: I0216 21:18:59.272300 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:18:59 crc kubenswrapper[4805]: I0216 21:18:59.272815 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:18:59 crc kubenswrapper[4805]: I0216 21:18:59.276237 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:18:59 crc kubenswrapper[4805]: I0216 21:18:59.276512 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Feb 16 21:18:59 crc kubenswrapper[4805]: I0216 21:18:59.614873 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd638c31-2ddf-4958-8149-7f3ebcb9b844" path="/var/lib/kubelet/pods/dd638c31-2ddf-4958-8149-7f3ebcb9b844/volumes" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.216705 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1958022f-e55d-473a-8a90-1c3238569c9c","Type":"ContainerStarted","Data":"734b83f9ea6c48eb3bdf27b10c413f6f9a25cf2afc85999eaec669f896a1224a"} Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.219320 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930c2405-0fd0-4bb2-921e-cbbc031e1c67","Type":"ContainerStarted","Data":"5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1"} Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.219944 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.222974 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.243463 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.243439579 podStartE2EDuration="2.243439579s" podCreationTimestamp="2026-02-16 21:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:19:00.236954575 +0000 UTC m=+1358.055637870" watchObservedRunningTime="2026-02-16 21:19:00.243439579 +0000 UTC m=+1358.062122914" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.285042 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.789704331 
podStartE2EDuration="5.285017526s" podCreationTimestamp="2026-02-16 21:18:55 +0000 UTC" firstStartedPulling="2026-02-16 21:18:56.177217564 +0000 UTC m=+1353.995900859" lastFinishedPulling="2026-02-16 21:18:59.672530759 +0000 UTC m=+1357.491214054" observedRunningTime="2026-02-16 21:19:00.282500858 +0000 UTC m=+1358.101184173" watchObservedRunningTime="2026-02-16 21:19:00.285017526 +0000 UTC m=+1358.103700841" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.447183 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-lpsc5"] Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.449489 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.479927 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-lpsc5"] Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.505663 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wdv\" (UniqueName: \"kubernetes.io/projected/a0874a96-7e2d-4cf2-847f-50d9b97704eb-kube-api-access-d7wdv\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.505788 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-config\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.505840 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.505899 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.506053 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.506108 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.607736 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.607803 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.607883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7wdv\" (UniqueName: \"kubernetes.io/projected/a0874a96-7e2d-4cf2-847f-50d9b97704eb-kube-api-access-d7wdv\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.607911 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-config\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.607948 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.607995 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.608945 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-svc\") pod 
\"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.609058 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.609084 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.609069 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-config\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.609536 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.637555 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7wdv\" (UniqueName: \"kubernetes.io/projected/a0874a96-7e2d-4cf2-847f-50d9b97704eb-kube-api-access-d7wdv\") pod \"dnsmasq-dns-f84f9ccf-lpsc5\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " 
pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:00 crc kubenswrapper[4805]: I0216 21:19:00.782739 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:01 crc kubenswrapper[4805]: I0216 21:19:01.233563 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:19:01 crc kubenswrapper[4805]: W0216 21:19:01.419009 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0874a96_7e2d_4cf2_847f_50d9b97704eb.slice/crio-ddb2611965d3eca6ffaaaa0b164f9b8a1ef7a40128f2ca395a664b14cda0352e WatchSource:0}: Error finding container ddb2611965d3eca6ffaaaa0b164f9b8a1ef7a40128f2ca395a664b14cda0352e: Status 404 returned error can't find the container with id ddb2611965d3eca6ffaaaa0b164f9b8a1ef7a40128f2ca395a664b14cda0352e Feb 16 21:19:01 crc kubenswrapper[4805]: I0216 21:19:01.424092 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-lpsc5"] Feb 16 21:19:02 crc kubenswrapper[4805]: I0216 21:19:02.246858 4805 generic.go:334] "Generic (PLEG): container finished" podID="a0874a96-7e2d-4cf2-847f-50d9b97704eb" containerID="cf041677e976787d1719e0a1abded87258aeac9018aab4a10af97559642086d5" exitCode=0 Feb 16 21:19:02 crc kubenswrapper[4805]: I0216 21:19:02.247000 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" event={"ID":"a0874a96-7e2d-4cf2-847f-50d9b97704eb","Type":"ContainerDied","Data":"cf041677e976787d1719e0a1abded87258aeac9018aab4a10af97559642086d5"} Feb 16 21:19:02 crc kubenswrapper[4805]: I0216 21:19:02.248087 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" event={"ID":"a0874a96-7e2d-4cf2-847f-50d9b97704eb","Type":"ContainerStarted","Data":"ddb2611965d3eca6ffaaaa0b164f9b8a1ef7a40128f2ca395a664b14cda0352e"} Feb 16 21:19:02 crc 
kubenswrapper[4805]: I0216 21:19:02.883206 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.266770 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerName="nova-api-log" containerID="cri-o://35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385" gracePeriod=30 Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.267172 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerName="nova-api-api" containerID="cri-o://e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb" gracePeriod=30 Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.267432 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" event={"ID":"a0874a96-7e2d-4cf2-847f-50d9b97704eb","Type":"ContainerStarted","Data":"5570b75c2abbcaab944e1429f72c20e8a04af4c2cb31f4121fd8486bd08bd1c6"} Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.267883 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.295953 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" podStartSLOduration=3.295933413 podStartE2EDuration="3.295933413s" podCreationTimestamp="2026-02-16 21:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:19:03.293752565 +0000 UTC m=+1361.112435860" watchObservedRunningTime="2026-02-16 21:19:03.295933413 +0000 UTC m=+1361.114616708" Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.568566 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.610997 4805 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod205f4efe-0a2d-4d28-a929-c89b671cefae"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod205f4efe-0a2d-4d28-a929-c89b671cefae] : Timed out while waiting for systemd to remove kubepods-besteffort-pod205f4efe_0a2d_4d28_a929_c89b671cefae.slice" Feb 16 21:19:03 crc kubenswrapper[4805]: E0216 21:19:03.611052 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod205f4efe-0a2d-4d28-a929-c89b671cefae] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod205f4efe-0a2d-4d28-a929-c89b671cefae] : Timed out while waiting for systemd to remove kubepods-besteffort-pod205f4efe_0a2d_4d28_a929_c89b671cefae.slice" pod="openstack/nova-cell1-conductor-db-sync-gxz95" podUID="205f4efe-0a2d-4d28-a929-c89b671cefae" Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.652487 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.652793 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="ceilometer-central-agent" containerID="cri-o://5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00" gracePeriod=30 Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.652924 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="proxy-httpd" containerID="cri-o://5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1" gracePeriod=30 Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.652965 4805 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="sg-core" containerID="cri-o://0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3" gracePeriod=30 Feb 16 21:19:03 crc kubenswrapper[4805]: I0216 21:19:03.653008 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="ceilometer-notification-agent" containerID="cri-o://12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b" gracePeriod=30 Feb 16 21:19:04 crc kubenswrapper[4805]: I0216 21:19:04.276851 4805 generic.go:334] "Generic (PLEG): container finished" podID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerID="35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385" exitCode=143 Feb 16 21:19:04 crc kubenswrapper[4805]: I0216 21:19:04.277013 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10779c3-e063-42aa-9d58-0c5687bc0dfc","Type":"ContainerDied","Data":"35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385"} Feb 16 21:19:04 crc kubenswrapper[4805]: I0216 21:19:04.279453 4805 generic.go:334] "Generic (PLEG): container finished" podID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerID="5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1" exitCode=0 Feb 16 21:19:04 crc kubenswrapper[4805]: I0216 21:19:04.279479 4805 generic.go:334] "Generic (PLEG): container finished" podID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerID="0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3" exitCode=2 Feb 16 21:19:04 crc kubenswrapper[4805]: I0216 21:19:04.279486 4805 generic.go:334] "Generic (PLEG): container finished" podID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerID="12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b" exitCode=0 Feb 16 21:19:04 crc kubenswrapper[4805]: I0216 21:19:04.279479 4805 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"930c2405-0fd0-4bb2-921e-cbbc031e1c67","Type":"ContainerDied","Data":"5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1"} Feb 16 21:19:04 crc kubenswrapper[4805]: I0216 21:19:04.279531 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930c2405-0fd0-4bb2-921e-cbbc031e1c67","Type":"ContainerDied","Data":"0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3"} Feb 16 21:19:04 crc kubenswrapper[4805]: I0216 21:19:04.279543 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930c2405-0fd0-4bb2-921e-cbbc031e1c67","Type":"ContainerDied","Data":"12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b"} Feb 16 21:19:04 crc kubenswrapper[4805]: I0216 21:19:04.279735 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gxz95" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.290218 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.307372 4805 generic.go:334] "Generic (PLEG): container finished" podID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerID="5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00" exitCode=0 Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.307453 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.307460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930c2405-0fd0-4bb2-921e-cbbc031e1c67","Type":"ContainerDied","Data":"5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00"} Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.307615 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"930c2405-0fd0-4bb2-921e-cbbc031e1c67","Type":"ContainerDied","Data":"d8e08992169d0ec8810541d0cd4000fc9a943769b0baf4456e2a354d1edcba8b"} Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.307657 4805 scope.go:117] "RemoveContainer" containerID="5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.336159 4805 scope.go:117] "RemoveContainer" containerID="0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.381629 4805 scope.go:117] "RemoveContainer" containerID="12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.387501 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-log-httpd\") pod \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.387613 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-config-data\") pod \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.387677 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-7j75n\" (UniqueName: \"kubernetes.io/projected/930c2405-0fd0-4bb2-921e-cbbc031e1c67-kube-api-access-7j75n\") pod \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.387734 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-combined-ca-bundle\") pod \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.387801 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-scripts\") pod \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.387839 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-run-httpd\") pod \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.387871 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-sg-core-conf-yaml\") pod \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\" (UID: \"930c2405-0fd0-4bb2-921e-cbbc031e1c67\") " Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.388937 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "930c2405-0fd0-4bb2-921e-cbbc031e1c67" (UID: "930c2405-0fd0-4bb2-921e-cbbc031e1c67"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.389960 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "930c2405-0fd0-4bb2-921e-cbbc031e1c67" (UID: "930c2405-0fd0-4bb2-921e-cbbc031e1c67"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.410249 4805 scope.go:117] "RemoveContainer" containerID="5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.415175 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-scripts" (OuterVolumeSpecName: "scripts") pod "930c2405-0fd0-4bb2-921e-cbbc031e1c67" (UID: "930c2405-0fd0-4bb2-921e-cbbc031e1c67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.415175 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930c2405-0fd0-4bb2-921e-cbbc031e1c67-kube-api-access-7j75n" (OuterVolumeSpecName: "kube-api-access-7j75n") pod "930c2405-0fd0-4bb2-921e-cbbc031e1c67" (UID: "930c2405-0fd0-4bb2-921e-cbbc031e1c67"). InnerVolumeSpecName "kube-api-access-7j75n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.435553 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "930c2405-0fd0-4bb2-921e-cbbc031e1c67" (UID: "930c2405-0fd0-4bb2-921e-cbbc031e1c67"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.477294 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "930c2405-0fd0-4bb2-921e-cbbc031e1c67" (UID: "930c2405-0fd0-4bb2-921e-cbbc031e1c67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.491035 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j75n\" (UniqueName: \"kubernetes.io/projected/930c2405-0fd0-4bb2-921e-cbbc031e1c67-kube-api-access-7j75n\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.491076 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.491090 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.491103 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.491117 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.491128 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/930c2405-0fd0-4bb2-921e-cbbc031e1c67-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.516356 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-config-data" (OuterVolumeSpecName: "config-data") pod "930c2405-0fd0-4bb2-921e-cbbc031e1c67" (UID: "930c2405-0fd0-4bb2-921e-cbbc031e1c67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.541510 4805 scope.go:117] "RemoveContainer" containerID="5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1" Feb 16 21:19:06 crc kubenswrapper[4805]: E0216 21:19:06.541987 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1\": container with ID starting with 5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1 not found: ID does not exist" containerID="5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.542032 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1"} err="failed to get container status \"5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1\": rpc error: code = NotFound desc = could not find container \"5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1\": container with ID starting with 5d46ec47243b8caefd7a98defb94aa8472c9da70c0e7365f9e3260e9736815c1 not found: ID does not exist" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.542061 4805 scope.go:117] "RemoveContainer" containerID="0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3" Feb 16 21:19:06 crc kubenswrapper[4805]: E0216 
21:19:06.542410 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3\": container with ID starting with 0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3 not found: ID does not exist" containerID="0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.542443 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3"} err="failed to get container status \"0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3\": rpc error: code = NotFound desc = could not find container \"0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3\": container with ID starting with 0774daaa6acaf2769e21300a0eb2f9daa179f57320fd7d31dc5d3b46fc91f3c3 not found: ID does not exist" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.542465 4805 scope.go:117] "RemoveContainer" containerID="12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b" Feb 16 21:19:06 crc kubenswrapper[4805]: E0216 21:19:06.542706 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b\": container with ID starting with 12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b not found: ID does not exist" containerID="12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.542826 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b"} err="failed to get container status \"12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b\": rpc 
error: code = NotFound desc = could not find container \"12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b\": container with ID starting with 12673d3caa988520bccf8017bcda7a7d5fa817d10c90c463530204bab41d1b0b not found: ID does not exist" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.542840 4805 scope.go:117] "RemoveContainer" containerID="5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00" Feb 16 21:19:06 crc kubenswrapper[4805]: E0216 21:19:06.543070 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00\": container with ID starting with 5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00 not found: ID does not exist" containerID="5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.543093 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00"} err="failed to get container status \"5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00\": rpc error: code = NotFound desc = could not find container \"5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00\": container with ID starting with 5bff7c8854c034508cbcfa82a999597f9a0c3e0614f917c86c8b087b01dceb00 not found: ID does not exist" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.593378 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930c2405-0fd0-4bb2-921e-cbbc031e1c67-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.646582 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.657173 4805 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.676264 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:19:06 crc kubenswrapper[4805]: E0216 21:19:06.677123 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="ceilometer-central-agent" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.677218 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="ceilometer-central-agent" Feb 16 21:19:06 crc kubenswrapper[4805]: E0216 21:19:06.677296 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="proxy-httpd" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.677346 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="proxy-httpd" Feb 16 21:19:06 crc kubenswrapper[4805]: E0216 21:19:06.677403 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="ceilometer-notification-agent" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.677467 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="ceilometer-notification-agent" Feb 16 21:19:06 crc kubenswrapper[4805]: E0216 21:19:06.677545 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="sg-core" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.677596 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="sg-core" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.677942 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" 
containerName="ceilometer-central-agent" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.678009 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="ceilometer-notification-agent" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.678082 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="sg-core" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.678159 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" containerName="proxy-httpd" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.680905 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.689166 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.689291 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.698208 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.798930 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-config-data\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.798996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-log-httpd\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") 
" pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.799094 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-scripts\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.799129 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.799150 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-run-httpd\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.799171 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9r7k\" (UniqueName: \"kubernetes.io/projected/f0ea1cbd-4507-4c9e-9ae8-968046c89287-kube-api-access-b9r7k\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.799241 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.901360 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-scripts\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.901434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.901464 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-run-httpd\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.901494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9r7k\" (UniqueName: \"kubernetes.io/projected/f0ea1cbd-4507-4c9e-9ae8-968046c89287-kube-api-access-b9r7k\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.901582 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.901642 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-config-data\") pod \"ceilometer-0\" (UID: 
\"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.901685 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-log-httpd\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.902007 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-run-httpd\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.902112 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-log-httpd\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.907703 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.908770 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.914690 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-scripts\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.916080 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-config-data\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:06 crc kubenswrapper[4805]: I0216 21:19:06.918830 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9r7k\" (UniqueName: \"kubernetes.io/projected/f0ea1cbd-4507-4c9e-9ae8-968046c89287-kube-api-access-b9r7k\") pod \"ceilometer-0\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " pod="openstack/ceilometer-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.012048 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.163558 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.310793 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10779c3-e063-42aa-9d58-0c5687bc0dfc-logs\") pod \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.311202 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-combined-ca-bundle\") pod \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.311348 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lmzn\" (UniqueName: \"kubernetes.io/projected/a10779c3-e063-42aa-9d58-0c5687bc0dfc-kube-api-access-5lmzn\") pod \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.311464 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-config-data\") pod \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\" (UID: \"a10779c3-e063-42aa-9d58-0c5687bc0dfc\") " Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.311611 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a10779c3-e063-42aa-9d58-0c5687bc0dfc-logs" (OuterVolumeSpecName: "logs") pod "a10779c3-e063-42aa-9d58-0c5687bc0dfc" (UID: "a10779c3-e063-42aa-9d58-0c5687bc0dfc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.312184 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10779c3-e063-42aa-9d58-0c5687bc0dfc-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.316007 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10779c3-e063-42aa-9d58-0c5687bc0dfc-kube-api-access-5lmzn" (OuterVolumeSpecName: "kube-api-access-5lmzn") pod "a10779c3-e063-42aa-9d58-0c5687bc0dfc" (UID: "a10779c3-e063-42aa-9d58-0c5687bc0dfc"). InnerVolumeSpecName "kube-api-access-5lmzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.339982 4805 generic.go:334] "Generic (PLEG): container finished" podID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerID="e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb" exitCode=0 Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.340183 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.340279 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10779c3-e063-42aa-9d58-0c5687bc0dfc","Type":"ContainerDied","Data":"e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb"} Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.340633 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a10779c3-e063-42aa-9d58-0c5687bc0dfc","Type":"ContainerDied","Data":"7b9dbab52e8703e81473cdf84583f56e063de3c59573503585a633af1c3c7922"} Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.340768 4805 scope.go:117] "RemoveContainer" containerID="e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.349399 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-config-data" (OuterVolumeSpecName: "config-data") pod "a10779c3-e063-42aa-9d58-0c5687bc0dfc" (UID: "a10779c3-e063-42aa-9d58-0c5687bc0dfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.363613 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a10779c3-e063-42aa-9d58-0c5687bc0dfc" (UID: "a10779c3-e063-42aa-9d58-0c5687bc0dfc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.405341 4805 scope.go:117] "RemoveContainer" containerID="35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.413940 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.413971 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lmzn\" (UniqueName: \"kubernetes.io/projected/a10779c3-e063-42aa-9d58-0c5687bc0dfc-kube-api-access-5lmzn\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.413984 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10779c3-e063-42aa-9d58-0c5687bc0dfc-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.428371 4805 scope.go:117] "RemoveContainer" containerID="e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb" Feb 16 21:19:07 crc kubenswrapper[4805]: E0216 21:19:07.428770 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb\": container with ID starting with e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb not found: ID does not exist" containerID="e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.428814 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb"} err="failed to get container status 
\"e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb\": rpc error: code = NotFound desc = could not find container \"e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb\": container with ID starting with e5576874bd2fea52c03666e593f8510e3534649d8c52eaf43fd8ff47c933a7fb not found: ID does not exist" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.428845 4805 scope.go:117] "RemoveContainer" containerID="35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385" Feb 16 21:19:07 crc kubenswrapper[4805]: E0216 21:19:07.429619 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385\": container with ID starting with 35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385 not found: ID does not exist" containerID="35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.429665 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385"} err="failed to get container status \"35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385\": rpc error: code = NotFound desc = could not find container \"35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385\": container with ID starting with 35555335218cf3b86b00807b2a7de5495e58cbc601e54cdff9ad6ff0c5baf385 not found: ID does not exist" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.500914 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.613625 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="930c2405-0fd0-4bb2-921e-cbbc031e1c67" path="/var/lib/kubelet/pods/930c2405-0fd0-4bb2-921e-cbbc031e1c67/volumes" Feb 16 21:19:07 crc 
kubenswrapper[4805]: I0216 21:19:07.666194 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.677513 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.686641 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:07 crc kubenswrapper[4805]: E0216 21:19:07.687116 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerName="nova-api-api" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.687134 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerName="nova-api-api" Feb 16 21:19:07 crc kubenswrapper[4805]: E0216 21:19:07.687174 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerName="nova-api-log" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.687180 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerName="nova-api-log" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.687381 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerName="nova-api-log" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.687406 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" containerName="nova-api-api" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.688639 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.692421 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.692530 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.692530 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.705103 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.826148 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/677259c6-ee93-4b2e-88fd-c09b276c3626-logs\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.826502 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkrzk\" (UniqueName: \"kubernetes.io/projected/677259c6-ee93-4b2e-88fd-c09b276c3626-kube-api-access-dkrzk\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.826540 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.826608 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-internal-tls-certs\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.826737 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-config-data\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.826993 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-public-tls-certs\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.929033 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/677259c6-ee93-4b2e-88fd-c09b276c3626-logs\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.929085 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkrzk\" (UniqueName: \"kubernetes.io/projected/677259c6-ee93-4b2e-88fd-c09b276c3626-kube-api-access-dkrzk\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.929120 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 
21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.929180 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-internal-tls-certs\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.929202 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-config-data\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.929279 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-public-tls-certs\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.929655 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/677259c6-ee93-4b2e-88fd-c09b276c3626-logs\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.935027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-config-data\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.935250 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.936295 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-internal-tls-certs\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.939908 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:07 crc kubenswrapper[4805]: I0216 21:19:07.945053 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkrzk\" (UniqueName: \"kubernetes.io/projected/677259c6-ee93-4b2e-88fd-c09b276c3626-kube-api-access-dkrzk\") pod \"nova-api-0\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " pod="openstack/nova-api-0" Feb 16 21:19:08 crc kubenswrapper[4805]: I0216 21:19:08.015197 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:19:08 crc kubenswrapper[4805]: I0216 21:19:08.368361 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0ea1cbd-4507-4c9e-9ae8-968046c89287","Type":"ContainerStarted","Data":"85ae6985528391bd4cdf6b22d739738fe409c31f69ec4645a7c9c42d1a392311"} Feb 16 21:19:08 crc kubenswrapper[4805]: I0216 21:19:08.368786 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0ea1cbd-4507-4c9e-9ae8-968046c89287","Type":"ContainerStarted","Data":"2c9a3dec57aa3df45f393ef7fb5d2ee7bcb8ef3a038190e9ab1538b9d420a5bb"} Feb 16 21:19:08 crc kubenswrapper[4805]: I0216 21:19:08.549389 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:08 crc kubenswrapper[4805]: I0216 21:19:08.568386 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:19:08 crc kubenswrapper[4805]: I0216 21:19:08.610899 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.392281 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"677259c6-ee93-4b2e-88fd-c09b276c3626","Type":"ContainerStarted","Data":"e265e652065c04b8ceffcfbd27e62f0d3003d22ae0c07be6378fbf712456415f"} Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.392555 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"677259c6-ee93-4b2e-88fd-c09b276c3626","Type":"ContainerStarted","Data":"bb0ecd220b0d7f379e1864936eb6bdafd5954ec87f38a56b08165ef0c1ff7783"} Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.392565 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"677259c6-ee93-4b2e-88fd-c09b276c3626","Type":"ContainerStarted","Data":"f40120787dd29c3cc966ba5d94000638c0f059685d26467e30bc864f113ab9ab"} Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.395966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0ea1cbd-4507-4c9e-9ae8-968046c89287","Type":"ContainerStarted","Data":"2cce352f3e3c73bb1702753a7aec416982d12e1ea833c9f2696564f33c58ab64"} Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.419819 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.423001 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.422979902 podStartE2EDuration="2.422979902s" podCreationTimestamp="2026-02-16 21:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:19:09.407049645 +0000 UTC m=+1367.225732940" watchObservedRunningTime="2026-02-16 21:19:09.422979902 +0000 UTC m=+1367.241663197" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.618313 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10779c3-e063-42aa-9d58-0c5687bc0dfc" path="/var/lib/kubelet/pods/a10779c3-e063-42aa-9d58-0c5687bc0dfc/volumes" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.708001 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-9q4ww"] Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.709529 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.712550 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.712768 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.734150 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9q4ww"] Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.807252 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwdhw\" (UniqueName: \"kubernetes.io/projected/86a8a573-3330-4e63-8261-ac19ae7bf18b-kube-api-access-cwdhw\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.807309 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-config-data\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.807444 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-scripts\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.807470 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.909996 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwdhw\" (UniqueName: \"kubernetes.io/projected/86a8a573-3330-4e63-8261-ac19ae7bf18b-kube-api-access-cwdhw\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.910045 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-config-data\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.910103 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-scripts\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.910124 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.920995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-config-data\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.922848 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-scripts\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.923180 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:09 crc kubenswrapper[4805]: I0216 21:19:09.938377 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwdhw\" (UniqueName: \"kubernetes.io/projected/86a8a573-3330-4e63-8261-ac19ae7bf18b-kube-api-access-cwdhw\") pod \"nova-cell1-cell-mapping-9q4ww\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:10 crc kubenswrapper[4805]: I0216 21:19:10.038544 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:10 crc kubenswrapper[4805]: I0216 21:19:10.409419 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0ea1cbd-4507-4c9e-9ae8-968046c89287","Type":"ContainerStarted","Data":"1876e419f77841d8a39822a34ad54814d35104c5a2cc91450e8710fb67cb192a"} Feb 16 21:19:10 crc kubenswrapper[4805]: W0216 21:19:10.551895 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86a8a573_3330_4e63_8261_ac19ae7bf18b.slice/crio-c34b99d9332ff831c2537e899e5a3e1ecce7e1ab747eb2e672ea48085d395238 WatchSource:0}: Error finding container c34b99d9332ff831c2537e899e5a3e1ecce7e1ab747eb2e672ea48085d395238: Status 404 returned error can't find the container with id c34b99d9332ff831c2537e899e5a3e1ecce7e1ab747eb2e672ea48085d395238 Feb 16 21:19:10 crc kubenswrapper[4805]: I0216 21:19:10.556129 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9q4ww"] Feb 16 21:19:10 crc kubenswrapper[4805]: I0216 21:19:10.783997 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:19:10 crc kubenswrapper[4805]: I0216 21:19:10.864551 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-9wqlb"] Feb 16 21:19:10 crc kubenswrapper[4805]: I0216 21:19:10.864823 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" podUID="cf368b03-7df6-43f3-ad40-ec381d152021" containerName="dnsmasq-dns" containerID="cri-o://3267befb626341ec1e07249560475c674806caff52d8f8836cc5ffe148d0e403" gracePeriod=10 Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.102428 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" podUID="cf368b03-7df6-43f3-ad40-ec381d152021" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.239:5353: connect: connection refused" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.436543 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0ea1cbd-4507-4c9e-9ae8-968046c89287","Type":"ContainerStarted","Data":"eb0b0d694244c5dce5fe6c07a502b3aadcd1c2efd5a829e3831533a682709ce8"} Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.438024 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.439947 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9q4ww" event={"ID":"86a8a573-3330-4e63-8261-ac19ae7bf18b","Type":"ContainerStarted","Data":"0c96b82eef487dc9c4b7b65e850d2b4570af7b32de589fa452373ab506c4b702"} Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.440062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9q4ww" event={"ID":"86a8a573-3330-4e63-8261-ac19ae7bf18b","Type":"ContainerStarted","Data":"c34b99d9332ff831c2537e899e5a3e1ecce7e1ab747eb2e672ea48085d395238"} Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.462884 4805 generic.go:334] "Generic (PLEG): container finished" podID="cf368b03-7df6-43f3-ad40-ec381d152021" containerID="3267befb626341ec1e07249560475c674806caff52d8f8836cc5ffe148d0e403" exitCode=0 Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.462944 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" event={"ID":"cf368b03-7df6-43f3-ad40-ec381d152021","Type":"ContainerDied","Data":"3267befb626341ec1e07249560475c674806caff52d8f8836cc5ffe148d0e403"} Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.462972 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" 
event={"ID":"cf368b03-7df6-43f3-ad40-ec381d152021","Type":"ContainerDied","Data":"baead96a06603392e7fdf83a9a3a28d0fe3f8417931fa9b4b910675010d7d7f9"} Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.463008 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baead96a06603392e7fdf83a9a3a28d0fe3f8417931fa9b4b910675010d7d7f9" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.480073 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.491182 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.045805963 podStartE2EDuration="5.491152756s" podCreationTimestamp="2026-02-16 21:19:06 +0000 UTC" firstStartedPulling="2026-02-16 21:19:07.511784144 +0000 UTC m=+1365.330467439" lastFinishedPulling="2026-02-16 21:19:10.957130937 +0000 UTC m=+1368.775814232" observedRunningTime="2026-02-16 21:19:11.457679827 +0000 UTC m=+1369.276363122" watchObservedRunningTime="2026-02-16 21:19:11.491152756 +0000 UTC m=+1369.309836071" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.522173 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-9q4ww" podStartSLOduration=2.522154718 podStartE2EDuration="2.522154718s" podCreationTimestamp="2026-02-16 21:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:19:11.494399074 +0000 UTC m=+1369.313082369" watchObservedRunningTime="2026-02-16 21:19:11.522154718 +0000 UTC m=+1369.340838013" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.556494 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-swift-storage-0\") pod \"cf368b03-7df6-43f3-ad40-ec381d152021\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.556603 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-svc\") pod \"cf368b03-7df6-43f3-ad40-ec381d152021\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.556634 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-nb\") pod \"cf368b03-7df6-43f3-ad40-ec381d152021\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.556679 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-config\") pod \"cf368b03-7df6-43f3-ad40-ec381d152021\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.556924 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phvqx\" (UniqueName: \"kubernetes.io/projected/cf368b03-7df6-43f3-ad40-ec381d152021-kube-api-access-phvqx\") pod \"cf368b03-7df6-43f3-ad40-ec381d152021\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.557008 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-sb\") pod \"cf368b03-7df6-43f3-ad40-ec381d152021\" (UID: \"cf368b03-7df6-43f3-ad40-ec381d152021\") " Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.605740 
4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf368b03-7df6-43f3-ad40-ec381d152021-kube-api-access-phvqx" (OuterVolumeSpecName: "kube-api-access-phvqx") pod "cf368b03-7df6-43f3-ad40-ec381d152021" (UID: "cf368b03-7df6-43f3-ad40-ec381d152021"). InnerVolumeSpecName "kube-api-access-phvqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.640446 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf368b03-7df6-43f3-ad40-ec381d152021" (UID: "cf368b03-7df6-43f3-ad40-ec381d152021"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.655131 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cf368b03-7df6-43f3-ad40-ec381d152021" (UID: "cf368b03-7df6-43f3-ad40-ec381d152021"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.660677 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-config" (OuterVolumeSpecName: "config") pod "cf368b03-7df6-43f3-ad40-ec381d152021" (UID: "cf368b03-7df6-43f3-ad40-ec381d152021"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.661426 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phvqx\" (UniqueName: \"kubernetes.io/projected/cf368b03-7df6-43f3-ad40-ec381d152021-kube-api-access-phvqx\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.661450 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.661460 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.661469 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.681494 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf368b03-7df6-43f3-ad40-ec381d152021" (UID: "cf368b03-7df6-43f3-ad40-ec381d152021"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.687418 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf368b03-7df6-43f3-ad40-ec381d152021" (UID: "cf368b03-7df6-43f3-ad40-ec381d152021"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.763587 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:11 crc kubenswrapper[4805]: I0216 21:19:11.763620 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf368b03-7df6-43f3-ad40-ec381d152021-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:12 crc kubenswrapper[4805]: I0216 21:19:12.473318 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-9wqlb" Feb 16 21:19:12 crc kubenswrapper[4805]: I0216 21:19:12.522401 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-9wqlb"] Feb 16 21:19:12 crc kubenswrapper[4805]: I0216 21:19:12.543485 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-9wqlb"] Feb 16 21:19:13 crc kubenswrapper[4805]: I0216 21:19:13.615665 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf368b03-7df6-43f3-ad40-ec381d152021" path="/var/lib/kubelet/pods/cf368b03-7df6-43f3-ad40-ec381d152021/volumes" Feb 16 21:19:16 crc kubenswrapper[4805]: I0216 21:19:16.537276 4805 generic.go:334] "Generic (PLEG): container finished" podID="86a8a573-3330-4e63-8261-ac19ae7bf18b" containerID="0c96b82eef487dc9c4b7b65e850d2b4570af7b32de589fa452373ab506c4b702" exitCode=0 Feb 16 21:19:16 crc kubenswrapper[4805]: I0216 21:19:16.537533 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9q4ww" event={"ID":"86a8a573-3330-4e63-8261-ac19ae7bf18b","Type":"ContainerDied","Data":"0c96b82eef487dc9c4b7b65e850d2b4570af7b32de589fa452373ab506c4b702"} Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.016268 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.016613 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.229215 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.354258 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-combined-ca-bundle\") pod \"86a8a573-3330-4e63-8261-ac19ae7bf18b\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.354328 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-config-data\") pod \"86a8a573-3330-4e63-8261-ac19ae7bf18b\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.354443 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-scripts\") pod \"86a8a573-3330-4e63-8261-ac19ae7bf18b\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.354594 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwdhw\" (UniqueName: \"kubernetes.io/projected/86a8a573-3330-4e63-8261-ac19ae7bf18b-kube-api-access-cwdhw\") pod \"86a8a573-3330-4e63-8261-ac19ae7bf18b\" (UID: \"86a8a573-3330-4e63-8261-ac19ae7bf18b\") " Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.361620 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/86a8a573-3330-4e63-8261-ac19ae7bf18b-kube-api-access-cwdhw" (OuterVolumeSpecName: "kube-api-access-cwdhw") pod "86a8a573-3330-4e63-8261-ac19ae7bf18b" (UID: "86a8a573-3330-4e63-8261-ac19ae7bf18b"). InnerVolumeSpecName "kube-api-access-cwdhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.362192 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-scripts" (OuterVolumeSpecName: "scripts") pod "86a8a573-3330-4e63-8261-ac19ae7bf18b" (UID: "86a8a573-3330-4e63-8261-ac19ae7bf18b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.393278 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-config-data" (OuterVolumeSpecName: "config-data") pod "86a8a573-3330-4e63-8261-ac19ae7bf18b" (UID: "86a8a573-3330-4e63-8261-ac19ae7bf18b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.399916 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86a8a573-3330-4e63-8261-ac19ae7bf18b" (UID: "86a8a573-3330-4e63-8261-ac19ae7bf18b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.457455 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwdhw\" (UniqueName: \"kubernetes.io/projected/86a8a573-3330-4e63-8261-ac19ae7bf18b-kube-api-access-cwdhw\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.457646 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.457769 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.457828 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86a8a573-3330-4e63-8261-ac19ae7bf18b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.609415 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9q4ww" event={"ID":"86a8a573-3330-4e63-8261-ac19ae7bf18b","Type":"ContainerDied","Data":"c34b99d9332ff831c2537e899e5a3e1ecce7e1ab747eb2e672ea48085d395238"} Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.609467 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c34b99d9332ff831c2537e899e5a3e1ecce7e1ab747eb2e672ea48085d395238" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.609548 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9q4ww" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.779027 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.779323 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerName="nova-api-log" containerID="cri-o://bb0ecd220b0d7f379e1864936eb6bdafd5954ec87f38a56b08165ef0c1ff7783" gracePeriod=30 Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.779489 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerName="nova-api-api" containerID="cri-o://e265e652065c04b8ceffcfbd27e62f0d3003d22ae0c07be6378fbf712456415f" gracePeriod=30 Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.790635 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.791032 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="92697e5a-1dd1-40ea-9b55-82b01bef5a3f" containerName="nova-scheduler-scheduler" containerID="cri-o://5b533119dc1f8d0ff94edd0debad63b9c4fc792009f0cbb74cbc51e2f4f11beb" gracePeriod=30 Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.800417 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.255:8774/\": EOF" Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.800435 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.255:8774/\": EOF" Feb 
16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.813854 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.814126 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-log" containerID="cri-o://5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2" gracePeriod=30 Feb 16 21:19:18 crc kubenswrapper[4805]: I0216 21:19:18.814677 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-metadata" containerID="cri-o://fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c" gracePeriod=30 Feb 16 21:19:19 crc kubenswrapper[4805]: I0216 21:19:19.620043 4805 generic.go:334] "Generic (PLEG): container finished" podID="1640d816-9924-451c-b2fd-21abd0975ef8" containerID="5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2" exitCode=143 Feb 16 21:19:19 crc kubenswrapper[4805]: I0216 21:19:19.620107 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1640d816-9924-451c-b2fd-21abd0975ef8","Type":"ContainerDied","Data":"5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2"} Feb 16 21:19:19 crc kubenswrapper[4805]: I0216 21:19:19.621963 4805 generic.go:334] "Generic (PLEG): container finished" podID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerID="bb0ecd220b0d7f379e1864936eb6bdafd5954ec87f38a56b08165ef0c1ff7783" exitCode=143 Feb 16 21:19:19 crc kubenswrapper[4805]: I0216 21:19:19.621980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"677259c6-ee93-4b2e-88fd-c09b276c3626","Type":"ContainerDied","Data":"bb0ecd220b0d7f379e1864936eb6bdafd5954ec87f38a56b08165ef0c1ff7783"} Feb 16 21:19:20 crc 
kubenswrapper[4805]: I0216 21:19:20.645455 4805 generic.go:334] "Generic (PLEG): container finished" podID="92697e5a-1dd1-40ea-9b55-82b01bef5a3f" containerID="5b533119dc1f8d0ff94edd0debad63b9c4fc792009f0cbb74cbc51e2f4f11beb" exitCode=0 Feb 16 21:19:20 crc kubenswrapper[4805]: I0216 21:19:20.645625 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92697e5a-1dd1-40ea-9b55-82b01bef5a3f","Type":"ContainerDied","Data":"5b533119dc1f8d0ff94edd0debad63b9c4fc792009f0cbb74cbc51e2f4f11beb"} Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.091138 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.146275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5ffz\" (UniqueName: \"kubernetes.io/projected/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-kube-api-access-g5ffz\") pod \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.146601 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-combined-ca-bundle\") pod \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.146888 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-config-data\") pod \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\" (UID: \"92697e5a-1dd1-40ea-9b55-82b01bef5a3f\") " Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.179936 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-kube-api-access-g5ffz" (OuterVolumeSpecName: "kube-api-access-g5ffz") pod "92697e5a-1dd1-40ea-9b55-82b01bef5a3f" (UID: "92697e5a-1dd1-40ea-9b55-82b01bef5a3f"). InnerVolumeSpecName "kube-api-access-g5ffz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.240815 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-config-data" (OuterVolumeSpecName: "config-data") pod "92697e5a-1dd1-40ea-9b55-82b01bef5a3f" (UID: "92697e5a-1dd1-40ea-9b55-82b01bef5a3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.255237 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.255270 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5ffz\" (UniqueName: \"kubernetes.io/projected/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-kube-api-access-g5ffz\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.261889 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92697e5a-1dd1-40ea-9b55-82b01bef5a3f" (UID: "92697e5a-1dd1-40ea-9b55-82b01bef5a3f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.357345 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92697e5a-1dd1-40ea-9b55-82b01bef5a3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.659517 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92697e5a-1dd1-40ea-9b55-82b01bef5a3f","Type":"ContainerDied","Data":"85fd1f28cd17d6a90460df1a7df8af7b8adcb7eab350bb329c24c7b2287ed959"} Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.659584 4805 scope.go:117] "RemoveContainer" containerID="5b533119dc1f8d0ff94edd0debad63b9c4fc792009f0cbb74cbc51e2f4f11beb" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.659590 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.724513 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.736918 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.754982 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:19:21 crc kubenswrapper[4805]: E0216 21:19:21.755671 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92697e5a-1dd1-40ea-9b55-82b01bef5a3f" containerName="nova-scheduler-scheduler" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.755825 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="92697e5a-1dd1-40ea-9b55-82b01bef5a3f" containerName="nova-scheduler-scheduler" Feb 16 21:19:21 crc kubenswrapper[4805]: E0216 21:19:21.755855 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="86a8a573-3330-4e63-8261-ac19ae7bf18b" containerName="nova-manage" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.755866 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a8a573-3330-4e63-8261-ac19ae7bf18b" containerName="nova-manage" Feb 16 21:19:21 crc kubenswrapper[4805]: E0216 21:19:21.755892 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf368b03-7df6-43f3-ad40-ec381d152021" containerName="dnsmasq-dns" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.755901 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf368b03-7df6-43f3-ad40-ec381d152021" containerName="dnsmasq-dns" Feb 16 21:19:21 crc kubenswrapper[4805]: E0216 21:19:21.755930 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf368b03-7df6-43f3-ad40-ec381d152021" containerName="init" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.755939 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf368b03-7df6-43f3-ad40-ec381d152021" containerName="init" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.756240 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="92697e5a-1dd1-40ea-9b55-82b01bef5a3f" containerName="nova-scheduler-scheduler" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.756257 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a8a573-3330-4e63-8261-ac19ae7bf18b" containerName="nova-manage" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.756293 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf368b03-7df6-43f3-ad40-ec381d152021" containerName="dnsmasq-dns" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.757302 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.760388 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.770645 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.870504 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0818f43c-cd3e-4a45-9970-d9efedc87f5b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0818f43c-cd3e-4a45-9970-d9efedc87f5b\") " pod="openstack/nova-scheduler-0" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.870566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrc5p\" (UniqueName: \"kubernetes.io/projected/0818f43c-cd3e-4a45-9970-d9efedc87f5b-kube-api-access-mrc5p\") pod \"nova-scheduler-0\" (UID: \"0818f43c-cd3e-4a45-9970-d9efedc87f5b\") " pod="openstack/nova-scheduler-0" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.870782 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0818f43c-cd3e-4a45-9970-d9efedc87f5b-config-data\") pod \"nova-scheduler-0\" (UID: \"0818f43c-cd3e-4a45-9970-d9efedc87f5b\") " pod="openstack/nova-scheduler-0" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.958893 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": read tcp 10.217.0.2:41388->10.217.0.246:8775: read: connection reset by peer" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.962629 4805 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": read tcp 10.217.0.2:41384->10.217.0.246:8775: read: connection reset by peer" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.973836 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0818f43c-cd3e-4a45-9970-d9efedc87f5b-config-data\") pod \"nova-scheduler-0\" (UID: \"0818f43c-cd3e-4a45-9970-d9efedc87f5b\") " pod="openstack/nova-scheduler-0" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.974157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0818f43c-cd3e-4a45-9970-d9efedc87f5b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0818f43c-cd3e-4a45-9970-d9efedc87f5b\") " pod="openstack/nova-scheduler-0" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.974208 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrc5p\" (UniqueName: \"kubernetes.io/projected/0818f43c-cd3e-4a45-9970-d9efedc87f5b-kube-api-access-mrc5p\") pod \"nova-scheduler-0\" (UID: \"0818f43c-cd3e-4a45-9970-d9efedc87f5b\") " pod="openstack/nova-scheduler-0" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.978953 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0818f43c-cd3e-4a45-9970-d9efedc87f5b-config-data\") pod \"nova-scheduler-0\" (UID: \"0818f43c-cd3e-4a45-9970-d9efedc87f5b\") " pod="openstack/nova-scheduler-0" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.978999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0818f43c-cd3e-4a45-9970-d9efedc87f5b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0818f43c-cd3e-4a45-9970-d9efedc87f5b\") " pod="openstack/nova-scheduler-0" Feb 16 21:19:21 crc kubenswrapper[4805]: I0216 21:19:21.997115 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrc5p\" (UniqueName: \"kubernetes.io/projected/0818f43c-cd3e-4a45-9970-d9efedc87f5b-kube-api-access-mrc5p\") pod \"nova-scheduler-0\" (UID: \"0818f43c-cd3e-4a45-9970-d9efedc87f5b\") " pod="openstack/nova-scheduler-0" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.082499 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.525973 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.590455 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-config-data\") pod \"1640d816-9924-451c-b2fd-21abd0975ef8\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.590503 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxlqm\" (UniqueName: \"kubernetes.io/projected/1640d816-9924-451c-b2fd-21abd0975ef8-kube-api-access-mxlqm\") pod \"1640d816-9924-451c-b2fd-21abd0975ef8\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.590594 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-combined-ca-bundle\") pod \"1640d816-9924-451c-b2fd-21abd0975ef8\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " Feb 16 21:19:22 
crc kubenswrapper[4805]: I0216 21:19:22.590753 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1640d816-9924-451c-b2fd-21abd0975ef8-logs\") pod \"1640d816-9924-451c-b2fd-21abd0975ef8\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.590874 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-nova-metadata-tls-certs\") pod \"1640d816-9924-451c-b2fd-21abd0975ef8\" (UID: \"1640d816-9924-451c-b2fd-21abd0975ef8\") " Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.593326 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1640d816-9924-451c-b2fd-21abd0975ef8-logs" (OuterVolumeSpecName: "logs") pod "1640d816-9924-451c-b2fd-21abd0975ef8" (UID: "1640d816-9924-451c-b2fd-21abd0975ef8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.600861 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1640d816-9924-451c-b2fd-21abd0975ef8-kube-api-access-mxlqm" (OuterVolumeSpecName: "kube-api-access-mxlqm") pod "1640d816-9924-451c-b2fd-21abd0975ef8" (UID: "1640d816-9924-451c-b2fd-21abd0975ef8"). InnerVolumeSpecName "kube-api-access-mxlqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.662784 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-config-data" (OuterVolumeSpecName: "config-data") pod "1640d816-9924-451c-b2fd-21abd0975ef8" (UID: "1640d816-9924-451c-b2fd-21abd0975ef8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.667618 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1640d816-9924-451c-b2fd-21abd0975ef8" (UID: "1640d816-9924-451c-b2fd-21abd0975ef8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.678480 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.680012 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1640d816-9924-451c-b2fd-21abd0975ef8" (UID: "1640d816-9924-451c-b2fd-21abd0975ef8"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.693320 4805 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.693568 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.693578 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxlqm\" (UniqueName: \"kubernetes.io/projected/1640d816-9924-451c-b2fd-21abd0975ef8-kube-api-access-mxlqm\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.693586 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1640d816-9924-451c-b2fd-21abd0975ef8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.693595 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1640d816-9924-451c-b2fd-21abd0975ef8-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.703232 4805 generic.go:334] "Generic (PLEG): container finished" podID="1640d816-9924-451c-b2fd-21abd0975ef8" containerID="fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c" exitCode=0 Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.703264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1640d816-9924-451c-b2fd-21abd0975ef8","Type":"ContainerDied","Data":"fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c"} Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 
21:19:22.703289 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1640d816-9924-451c-b2fd-21abd0975ef8","Type":"ContainerDied","Data":"dff16f4207f2da99ff749d5970e05751cd27589be1551c35a4d7ec11d69e27af"} Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.703305 4805 scope.go:117] "RemoveContainer" containerID="fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.703436 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.835521 4805 scope.go:117] "RemoveContainer" containerID="5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.859654 4805 scope.go:117] "RemoveContainer" containerID="fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c" Feb 16 21:19:22 crc kubenswrapper[4805]: E0216 21:19:22.862410 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c\": container with ID starting with fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c not found: ID does not exist" containerID="fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.862449 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c"} err="failed to get container status \"fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c\": rpc error: code = NotFound desc = could not find container \"fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c\": container with ID starting with fa90a484b81854b980834277292546efcd9271b448b03af4a8cd205a052a663c not 
found: ID does not exist" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.862473 4805 scope.go:117] "RemoveContainer" containerID="5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2" Feb 16 21:19:22 crc kubenswrapper[4805]: E0216 21:19:22.862743 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2\": container with ID starting with 5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2 not found: ID does not exist" containerID="5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.862764 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2"} err="failed to get container status \"5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2\": rpc error: code = NotFound desc = could not find container \"5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2\": container with ID starting with 5e27adb3d62a5537862576f939a699f7ce8aede72c404ad25bdcb18d79e77bd2 not found: ID does not exist" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.865017 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.875113 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.892310 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:19:22 crc kubenswrapper[4805]: E0216 21:19:22.892814 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-log" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.892832 4805 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-log" Feb 16 21:19:22 crc kubenswrapper[4805]: E0216 21:19:22.892866 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-metadata" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.892874 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-metadata" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.893085 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-metadata" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.893104 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" containerName="nova-metadata-log" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.894306 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.897949 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.897978 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.911815 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.999447 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec05129-5695-43b4-af95-d5335dc56879-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.999493 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec05129-5695-43b4-af95-d5335dc56879-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.999521 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s4dl\" (UniqueName: \"kubernetes.io/projected/3ec05129-5695-43b4-af95-d5335dc56879-kube-api-access-4s4dl\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.999573 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3ec05129-5695-43b4-af95-d5335dc56879-config-data\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:22 crc kubenswrapper[4805]: I0216 21:19:22.999635 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ec05129-5695-43b4-af95-d5335dc56879-logs\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.102323 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ec05129-5695-43b4-af95-d5335dc56879-logs\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.102802 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ec05129-5695-43b4-af95-d5335dc56879-logs\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.102842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec05129-5695-43b4-af95-d5335dc56879-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.102883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec05129-5695-43b4-af95-d5335dc56879-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc 
kubenswrapper[4805]: I0216 21:19:23.102934 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s4dl\" (UniqueName: \"kubernetes.io/projected/3ec05129-5695-43b4-af95-d5335dc56879-kube-api-access-4s4dl\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.103086 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec05129-5695-43b4-af95-d5335dc56879-config-data\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.107599 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec05129-5695-43b4-af95-d5335dc56879-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.108130 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec05129-5695-43b4-af95-d5335dc56879-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.108242 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec05129-5695-43b4-af95-d5335dc56879-config-data\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.118999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s4dl\" (UniqueName: 
\"kubernetes.io/projected/3ec05129-5695-43b4-af95-d5335dc56879-kube-api-access-4s4dl\") pod \"nova-metadata-0\" (UID: \"3ec05129-5695-43b4-af95-d5335dc56879\") " pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.210914 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.622057 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1640d816-9924-451c-b2fd-21abd0975ef8" path="/var/lib/kubelet/pods/1640d816-9924-451c-b2fd-21abd0975ef8/volumes" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.623662 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92697e5a-1dd1-40ea-9b55-82b01bef5a3f" path="/var/lib/kubelet/pods/92697e5a-1dd1-40ea-9b55-82b01bef5a3f/volumes" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.716217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0818f43c-cd3e-4a45-9970-d9efedc87f5b","Type":"ContainerStarted","Data":"a39e51ad5aca808d45d1aaf8ae204b2d3d7b4b11df30ae1c193f0b29f4d885f4"} Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.717176 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0818f43c-cd3e-4a45-9970-d9efedc87f5b","Type":"ContainerStarted","Data":"6b848db3625db5a24b9a0a017b1dc1a65d40d721d60578c0216d6f1a54efba24"} Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.748892 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.748875593 podStartE2EDuration="2.748875593s" podCreationTimestamp="2026-02-16 21:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:19:23.743020846 +0000 UTC m=+1381.561704141" watchObservedRunningTime="2026-02-16 21:19:23.748875593 
+0000 UTC m=+1381.567558878" Feb 16 21:19:23 crc kubenswrapper[4805]: I0216 21:19:23.785148 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:19:24 crc kubenswrapper[4805]: I0216 21:19:24.738367 4805 generic.go:334] "Generic (PLEG): container finished" podID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerID="e265e652065c04b8ceffcfbd27e62f0d3003d22ae0c07be6378fbf712456415f" exitCode=0 Feb 16 21:19:24 crc kubenswrapper[4805]: I0216 21:19:24.738444 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"677259c6-ee93-4b2e-88fd-c09b276c3626","Type":"ContainerDied","Data":"e265e652065c04b8ceffcfbd27e62f0d3003d22ae0c07be6378fbf712456415f"} Feb 16 21:19:24 crc kubenswrapper[4805]: I0216 21:19:24.746040 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ec05129-5695-43b4-af95-d5335dc56879","Type":"ContainerStarted","Data":"9b818c28409836b658fad5d3c1b9020f5fee4568ec0fbe8ea8ea0968cfb421fa"} Feb 16 21:19:24 crc kubenswrapper[4805]: I0216 21:19:24.746083 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ec05129-5695-43b4-af95-d5335dc56879","Type":"ContainerStarted","Data":"158af939f65ece9e558fa3872bc44bcda7a98e621d7359627ce31fc9d49b38bd"} Feb 16 21:19:24 crc kubenswrapper[4805]: I0216 21:19:24.746094 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ec05129-5695-43b4-af95-d5335dc56879","Type":"ContainerStarted","Data":"daefb7526c4b3ce1e175a0dfaad6e9de781fe7a3216aff6c44fe10b3b743d38d"} Feb 16 21:19:24 crc kubenswrapper[4805]: I0216 21:19:24.774135 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.774115772 podStartE2EDuration="2.774115772s" podCreationTimestamp="2026-02-16 21:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:19:24.767056362 +0000 UTC m=+1382.585739657" watchObservedRunningTime="2026-02-16 21:19:24.774115772 +0000 UTC m=+1382.592799067" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.041880 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.178286 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkrzk\" (UniqueName: \"kubernetes.io/projected/677259c6-ee93-4b2e-88fd-c09b276c3626-kube-api-access-dkrzk\") pod \"677259c6-ee93-4b2e-88fd-c09b276c3626\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.178428 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-config-data\") pod \"677259c6-ee93-4b2e-88fd-c09b276c3626\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.178469 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-internal-tls-certs\") pod \"677259c6-ee93-4b2e-88fd-c09b276c3626\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.178575 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-combined-ca-bundle\") pod \"677259c6-ee93-4b2e-88fd-c09b276c3626\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.178767 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/677259c6-ee93-4b2e-88fd-c09b276c3626-logs\") pod \"677259c6-ee93-4b2e-88fd-c09b276c3626\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.178938 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-public-tls-certs\") pod \"677259c6-ee93-4b2e-88fd-c09b276c3626\" (UID: \"677259c6-ee93-4b2e-88fd-c09b276c3626\") " Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.179180 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/677259c6-ee93-4b2e-88fd-c09b276c3626-logs" (OuterVolumeSpecName: "logs") pod "677259c6-ee93-4b2e-88fd-c09b276c3626" (UID: "677259c6-ee93-4b2e-88fd-c09b276c3626"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.179964 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/677259c6-ee93-4b2e-88fd-c09b276c3626-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.184980 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/677259c6-ee93-4b2e-88fd-c09b276c3626-kube-api-access-dkrzk" (OuterVolumeSpecName: "kube-api-access-dkrzk") pod "677259c6-ee93-4b2e-88fd-c09b276c3626" (UID: "677259c6-ee93-4b2e-88fd-c09b276c3626"). InnerVolumeSpecName "kube-api-access-dkrzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.212823 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-config-data" (OuterVolumeSpecName: "config-data") pod "677259c6-ee93-4b2e-88fd-c09b276c3626" (UID: "677259c6-ee93-4b2e-88fd-c09b276c3626"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.231030 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "677259c6-ee93-4b2e-88fd-c09b276c3626" (UID: "677259c6-ee93-4b2e-88fd-c09b276c3626"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.242394 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "677259c6-ee93-4b2e-88fd-c09b276c3626" (UID: "677259c6-ee93-4b2e-88fd-c09b276c3626"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.249926 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "677259c6-ee93-4b2e-88fd-c09b276c3626" (UID: "677259c6-ee93-4b2e-88fd-c09b276c3626"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.283363 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkrzk\" (UniqueName: \"kubernetes.io/projected/677259c6-ee93-4b2e-88fd-c09b276c3626-kube-api-access-dkrzk\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.283425 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.283436 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.283444 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.283452 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677259c6-ee93-4b2e-88fd-c09b276c3626-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.759312 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.759358 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"677259c6-ee93-4b2e-88fd-c09b276c3626","Type":"ContainerDied","Data":"f40120787dd29c3cc966ba5d94000638c0f059685d26467e30bc864f113ab9ab"} Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.759741 4805 scope.go:117] "RemoveContainer" containerID="e265e652065c04b8ceffcfbd27e62f0d3003d22ae0c07be6378fbf712456415f" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.802272 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.807249 4805 scope.go:117] "RemoveContainer" containerID="bb0ecd220b0d7f379e1864936eb6bdafd5954ec87f38a56b08165ef0c1ff7783" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.823521 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.844344 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:25 crc kubenswrapper[4805]: E0216 21:19:25.844916 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerName="nova-api-api" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.844941 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerName="nova-api-api" Feb 16 21:19:25 crc kubenswrapper[4805]: E0216 21:19:25.844988 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerName="nova-api-log" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.844997 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerName="nova-api-log" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.845251 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerName="nova-api-log" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.845296 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="677259c6-ee93-4b2e-88fd-c09b276c3626" containerName="nova-api-api" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.846615 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.848294 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.849073 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.849496 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.873099 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.999627 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.999749 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkfrq\" (UniqueName: \"kubernetes.io/projected/203349b1-a943-4795-ad7a-b5bd48435b86-kube-api-access-vkfrq\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.999809 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-internal-tls-certs\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.999872 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-config-data\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:25 crc kubenswrapper[4805]: I0216 21:19:25.999897 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203349b1-a943-4795-ad7a-b5bd48435b86-logs\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:25.999940 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-public-tls-certs\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.101514 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-public-tls-certs\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.101613 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.101711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkfrq\" (UniqueName: \"kubernetes.io/projected/203349b1-a943-4795-ad7a-b5bd48435b86-kube-api-access-vkfrq\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.101815 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-internal-tls-certs\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.101869 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-config-data\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.101903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203349b1-a943-4795-ad7a-b5bd48435b86-logs\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.102470 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203349b1-a943-4795-ad7a-b5bd48435b86-logs\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.106599 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-config-data\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.106738 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.106829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-public-tls-certs\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.117318 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/203349b1-a943-4795-ad7a-b5bd48435b86-internal-tls-certs\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.129340 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkfrq\" (UniqueName: \"kubernetes.io/projected/203349b1-a943-4795-ad7a-b5bd48435b86-kube-api-access-vkfrq\") pod \"nova-api-0\" (UID: \"203349b1-a943-4795-ad7a-b5bd48435b86\") " pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.176363 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.720040 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:19:26 crc kubenswrapper[4805]: I0216 21:19:26.771552 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"203349b1-a943-4795-ad7a-b5bd48435b86","Type":"ContainerStarted","Data":"2f4d2e1aba0c5e0b407356103c91020e0bde33eeb1bbe35e3867b50215232d74"} Feb 16 21:19:27 crc kubenswrapper[4805]: I0216 21:19:27.098174 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 21:19:27 crc kubenswrapper[4805]: I0216 21:19:27.620357 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="677259c6-ee93-4b2e-88fd-c09b276c3626" path="/var/lib/kubelet/pods/677259c6-ee93-4b2e-88fd-c09b276c3626/volumes" Feb 16 21:19:27 crc kubenswrapper[4805]: I0216 21:19:27.834288 4805 generic.go:334] "Generic (PLEG): container finished" podID="52add033-f900-449f-a793-bed363692402" containerID="c499819ce413d7af9d1e217f801dfb6867d4dff0a8f61937a1fa7b8981a46c4e" exitCode=137 Feb 16 21:19:27 crc kubenswrapper[4805]: I0216 21:19:27.834368 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"52add033-f900-449f-a793-bed363692402","Type":"ContainerDied","Data":"c499819ce413d7af9d1e217f801dfb6867d4dff0a8f61937a1fa7b8981a46c4e"} Feb 16 21:19:27 crc kubenswrapper[4805]: I0216 21:19:27.839478 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"203349b1-a943-4795-ad7a-b5bd48435b86","Type":"ContainerStarted","Data":"560c05aac93951503241e0a69c54a4b3b26b91dfa15a1a3dbb1e35687057efe1"} Feb 16 21:19:27 crc kubenswrapper[4805]: I0216 21:19:27.839702 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"203349b1-a943-4795-ad7a-b5bd48435b86","Type":"ContainerStarted","Data":"fc6588c8b645ccb6c2559c9435bb33296fecb4bfb6d28bc73552f5b6942541bd"} Feb 16 21:19:27 crc kubenswrapper[4805]: I0216 21:19:27.895813 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.895586678 podStartE2EDuration="2.895586678s" podCreationTimestamp="2026-02-16 21:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:19:27.879192098 +0000 UTC m=+1385.697875393" watchObservedRunningTime="2026-02-16 21:19:27.895586678 +0000 UTC m=+1385.714269973" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.211683 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.212010 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.239092 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.382666 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-config-data\") pod \"52add033-f900-449f-a793-bed363692402\" (UID: \"52add033-f900-449f-a793-bed363692402\") " Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.383059 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-scripts\") pod \"52add033-f900-449f-a793-bed363692402\" (UID: \"52add033-f900-449f-a793-bed363692402\") " Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.383320 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrvw7\" (UniqueName: \"kubernetes.io/projected/52add033-f900-449f-a793-bed363692402-kube-api-access-rrvw7\") pod \"52add033-f900-449f-a793-bed363692402\" (UID: \"52add033-f900-449f-a793-bed363692402\") " Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.383479 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-combined-ca-bundle\") pod \"52add033-f900-449f-a793-bed363692402\" (UID: \"52add033-f900-449f-a793-bed363692402\") " Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.392301 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52add033-f900-449f-a793-bed363692402-kube-api-access-rrvw7" (OuterVolumeSpecName: "kube-api-access-rrvw7") pod "52add033-f900-449f-a793-bed363692402" (UID: "52add033-f900-449f-a793-bed363692402"). InnerVolumeSpecName "kube-api-access-rrvw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.427178 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-scripts" (OuterVolumeSpecName: "scripts") pod "52add033-f900-449f-a793-bed363692402" (UID: "52add033-f900-449f-a793-bed363692402"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.487105 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.487151 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrvw7\" (UniqueName: \"kubernetes.io/projected/52add033-f900-449f-a793-bed363692402-kube-api-access-rrvw7\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.513341 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52add033-f900-449f-a793-bed363692402" (UID: "52add033-f900-449f-a793-bed363692402"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.525209 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-config-data" (OuterVolumeSpecName: "config-data") pod "52add033-f900-449f-a793-bed363692402" (UID: "52add033-f900-449f-a793-bed363692402"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.589402 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.589987 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52add033-f900-449f-a793-bed363692402-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.858216 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.865676 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"52add033-f900-449f-a793-bed363692402","Type":"ContainerDied","Data":"2ae86fc06ab1f362cb0d30be8da2695e2a4a8deaf2fabe2fb8a9f5c5c0c4b260"} Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.865807 4805 scope.go:117] "RemoveContainer" containerID="c499819ce413d7af9d1e217f801dfb6867d4dff0a8f61937a1fa7b8981a46c4e" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.912972 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.936732 4805 scope.go:117] "RemoveContainer" containerID="0347c5e11743da60f38dd839acda381bd17e18f1dc5d354f7c10302bfee8ceed" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.947844 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.963700 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 16 21:19:28 crc kubenswrapper[4805]: E0216 21:19:28.964399 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52add033-f900-449f-a793-bed363692402" 
containerName="aodh-evaluator" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.964425 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-evaluator" Feb 16 21:19:28 crc kubenswrapper[4805]: E0216 21:19:28.964445 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-notifier" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.964454 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-notifier" Feb 16 21:19:28 crc kubenswrapper[4805]: E0216 21:19:28.964476 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-api" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.964485 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-api" Feb 16 21:19:28 crc kubenswrapper[4805]: E0216 21:19:28.964529 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-listener" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.964539 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-listener" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.964894 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-evaluator" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.964920 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-listener" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.964937 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-api" Feb 16 
21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.964953 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="52add033-f900-449f-a793-bed363692402" containerName="aodh-notifier" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.968864 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.971217 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-8mrcb" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.972662 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.974114 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.980546 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.982195 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.982440 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 16 21:19:28 crc kubenswrapper[4805]: I0216 21:19:28.993620 4805 scope.go:117] "RemoveContainer" containerID="6fd31cd7da6dd5521470bb4f12f11f2ab2eee3f5575319475b37cb11c463df04" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.025363 4805 scope.go:117] "RemoveContainer" containerID="7a78482f088aaddd4a08ddbac4f4f23383170fd402187473bc5e68d9509d11c2" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.033440 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-scripts\") pod \"aodh-0\" (UID: 
\"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.135808 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.135859 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-scripts\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.135919 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqchz\" (UniqueName: \"kubernetes.io/projected/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-kube-api-access-xqchz\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.136054 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.136107 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-config-data\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.136135 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-public-tls-certs\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.140637 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-scripts\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.238287 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-public-tls-certs\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.238410 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.238465 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqchz\" (UniqueName: \"kubernetes.io/projected/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-kube-api-access-xqchz\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.238647 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 
16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.238752 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-config-data\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.242127 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-public-tls-certs\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.242773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-config-data\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.243062 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.243081 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.258883 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqchz\" (UniqueName: \"kubernetes.io/projected/e7b0acc2-1c23-4182-85ca-3ab0293b64a0-kube-api-access-xqchz\") pod \"aodh-0\" (UID: 
\"e7b0acc2-1c23-4182-85ca-3ab0293b64a0\") " pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.306344 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.614514 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52add033-f900-449f-a793-bed363692402" path="/var/lib/kubelet/pods/52add033-f900-449f-a793-bed363692402/volumes" Feb 16 21:19:29 crc kubenswrapper[4805]: W0216 21:19:29.781151 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7b0acc2_1c23_4182_85ca_3ab0293b64a0.slice/crio-b112e398a2396dc98d4ff95189709b01ed6c95be0e3cf8c0ac7118ea807181a7 WatchSource:0}: Error finding container b112e398a2396dc98d4ff95189709b01ed6c95be0e3cf8c0ac7118ea807181a7: Status 404 returned error can't find the container with id b112e398a2396dc98d4ff95189709b01ed6c95be0e3cf8c0ac7118ea807181a7 Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.797771 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 21:19:29 crc kubenswrapper[4805]: I0216 21:19:29.874077 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e7b0acc2-1c23-4182-85ca-3ab0293b64a0","Type":"ContainerStarted","Data":"b112e398a2396dc98d4ff95189709b01ed6c95be0e3cf8c0ac7118ea807181a7"} Feb 16 21:19:30 crc kubenswrapper[4805]: I0216 21:19:30.889079 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e7b0acc2-1c23-4182-85ca-3ab0293b64a0","Type":"ContainerStarted","Data":"0cb456424fe1a463f848c89c32fd8fe75bd2c399e1b0e4a16fe70c2d5a4e528a"} Feb 16 21:19:31 crc kubenswrapper[4805]: I0216 21:19:31.904957 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"e7b0acc2-1c23-4182-85ca-3ab0293b64a0","Type":"ContainerStarted","Data":"6dc6afd1ff2e64c904388b8701ec45499884259ebeb8d7a77ef623bba45d360e"} Feb 16 21:19:32 crc kubenswrapper[4805]: I0216 21:19:32.083733 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 21:19:32 crc kubenswrapper[4805]: I0216 21:19:32.119453 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 21:19:32 crc kubenswrapper[4805]: I0216 21:19:32.922158 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e7b0acc2-1c23-4182-85ca-3ab0293b64a0","Type":"ContainerStarted","Data":"c499e8efe62c3afc8f7bc42297ffdee0d8428380a9d07f32ae4ae7bc8ad28cc8"} Feb 16 21:19:32 crc kubenswrapper[4805]: I0216 21:19:32.922847 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e7b0acc2-1c23-4182-85ca-3ab0293b64a0","Type":"ContainerStarted","Data":"160a8fe322485690266a24384b9a5374ad3696394ce2bfeddde8ae3476057990"} Feb 16 21:19:32 crc kubenswrapper[4805]: I0216 21:19:32.948407 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.4559118460000002 podStartE2EDuration="4.948379092s" podCreationTimestamp="2026-02-16 21:19:28 +0000 UTC" firstStartedPulling="2026-02-16 21:19:29.783701016 +0000 UTC m=+1387.602384311" lastFinishedPulling="2026-02-16 21:19:32.276168272 +0000 UTC m=+1390.094851557" observedRunningTime="2026-02-16 21:19:32.945656939 +0000 UTC m=+1390.764340244" watchObservedRunningTime="2026-02-16 21:19:32.948379092 +0000 UTC m=+1390.767062397" Feb 16 21:19:32 crc kubenswrapper[4805]: I0216 21:19:32.988634 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.211014 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.211156 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.620283 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qd6k5"] Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.623299 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.639089 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qd6k5"] Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.742460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-utilities\") pod \"redhat-operators-qd6k5\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.742510 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kts5\" (UniqueName: \"kubernetes.io/projected/2b81905d-b8bf-4f25-a1e4-e08d71909833-kube-api-access-7kts5\") pod \"redhat-operators-qd6k5\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.743550 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-catalog-content\") pod \"redhat-operators-qd6k5\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:33 crc 
kubenswrapper[4805]: I0216 21:19:33.846180 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-catalog-content\") pod \"redhat-operators-qd6k5\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.846304 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-utilities\") pod \"redhat-operators-qd6k5\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.846327 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kts5\" (UniqueName: \"kubernetes.io/projected/2b81905d-b8bf-4f25-a1e4-e08d71909833-kube-api-access-7kts5\") pod \"redhat-operators-qd6k5\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.846920 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-catalog-content\") pod \"redhat-operators-qd6k5\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.846976 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-utilities\") pod \"redhat-operators-qd6k5\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.864926 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kts5\" (UniqueName: \"kubernetes.io/projected/2b81905d-b8bf-4f25-a1e4-e08d71909833-kube-api-access-7kts5\") pod \"redhat-operators-qd6k5\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:33 crc kubenswrapper[4805]: I0216 21:19:33.948539 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:34 crc kubenswrapper[4805]: I0216 21:19:34.236065 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3ec05129-5695-43b4-af95-d5335dc56879" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:19:34 crc kubenswrapper[4805]: I0216 21:19:34.236477 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3ec05129-5695-43b4-af95-d5335dc56879" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:19:34 crc kubenswrapper[4805]: I0216 21:19:34.365362 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qd6k5"] Feb 16 21:19:34 crc kubenswrapper[4805]: I0216 21:19:34.946553 4805 generic.go:334] "Generic (PLEG): container finished" podID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerID="c41cd922c454e7963b2f1cfdc619a436c73cb3f5e27a1de27b67e795b13f7679" exitCode=0 Feb 16 21:19:34 crc kubenswrapper[4805]: I0216 21:19:34.946671 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6k5" 
event={"ID":"2b81905d-b8bf-4f25-a1e4-e08d71909833","Type":"ContainerDied","Data":"c41cd922c454e7963b2f1cfdc619a436c73cb3f5e27a1de27b67e795b13f7679"} Feb 16 21:19:34 crc kubenswrapper[4805]: I0216 21:19:34.947152 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6k5" event={"ID":"2b81905d-b8bf-4f25-a1e4-e08d71909833","Type":"ContainerStarted","Data":"5063d54a94de18b0a25312a3938dbbd05a08373aec4f6c889775993bdf492af6"} Feb 16 21:19:35 crc kubenswrapper[4805]: I0216 21:19:35.959132 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6k5" event={"ID":"2b81905d-b8bf-4f25-a1e4-e08d71909833","Type":"ContainerStarted","Data":"b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3"} Feb 16 21:19:36 crc kubenswrapper[4805]: I0216 21:19:36.177601 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:19:36 crc kubenswrapper[4805]: I0216 21:19:36.178063 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:19:37 crc kubenswrapper[4805]: I0216 21:19:37.018040 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 21:19:37 crc kubenswrapper[4805]: I0216 21:19:37.193853 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="203349b1-a943-4795-ad7a-b5bd48435b86" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:19:37 crc kubenswrapper[4805]: I0216 21:19:37.193928 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="203349b1-a943-4795-ad7a-b5bd48435b86" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 16 21:19:41 crc kubenswrapper[4805]: I0216 21:19:41.025875 4805 generic.go:334] "Generic (PLEG): container finished" podID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerID="b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3" exitCode=0 Feb 16 21:19:41 crc kubenswrapper[4805]: I0216 21:19:41.026028 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6k5" event={"ID":"2b81905d-b8bf-4f25-a1e4-e08d71909833","Type":"ContainerDied","Data":"b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3"} Feb 16 21:19:41 crc kubenswrapper[4805]: I0216 21:19:41.765284 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:19:41 crc kubenswrapper[4805]: I0216 21:19:41.765820 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd" containerName="kube-state-metrics" containerID="cri-o://33484dc35b67d5539f060def9b4ee2eac83b9d86c0cc5a9d1ea82a3904506c8f" gracePeriod=30 Feb 16 21:19:41 crc kubenswrapper[4805]: I0216 21:19:41.868038 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:19:41 crc kubenswrapper[4805]: I0216 21:19:41.868305 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9" containerName="mysqld-exporter" containerID="cri-o://13d9c9217278264ab7874693bdfa449c8e4cd5d1bf29646fef06cef800c79de9" gracePeriod=30 Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.066081 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6k5" event={"ID":"2b81905d-b8bf-4f25-a1e4-e08d71909833","Type":"ContainerStarted","Data":"094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974"} Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.069866 
4805 generic.go:334] "Generic (PLEG): container finished" podID="9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9" containerID="13d9c9217278264ab7874693bdfa449c8e4cd5d1bf29646fef06cef800c79de9" exitCode=2 Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.069932 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9","Type":"ContainerDied","Data":"13d9c9217278264ab7874693bdfa449c8e4cd5d1bf29646fef06cef800c79de9"} Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.073985 4805 generic.go:334] "Generic (PLEG): container finished" podID="9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd" containerID="33484dc35b67d5539f060def9b4ee2eac83b9d86c0cc5a9d1ea82a3904506c8f" exitCode=2 Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.074030 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd","Type":"ContainerDied","Data":"33484dc35b67d5539f060def9b4ee2eac83b9d86c0cc5a9d1ea82a3904506c8f"} Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.100049 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qd6k5" podStartSLOduration=2.5822834869999998 podStartE2EDuration="9.100032098s" podCreationTimestamp="2026-02-16 21:19:33 +0000 UTC" firstStartedPulling="2026-02-16 21:19:34.948285534 +0000 UTC m=+1392.766968829" lastFinishedPulling="2026-02-16 21:19:41.466034145 +0000 UTC m=+1399.284717440" observedRunningTime="2026-02-16 21:19:42.095969639 +0000 UTC m=+1399.914652934" watchObservedRunningTime="2026-02-16 21:19:42.100032098 +0000 UTC m=+1399.918715393" Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.549443 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.674244 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wc57\" (UniqueName: \"kubernetes.io/projected/9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd-kube-api-access-2wc57\") pod \"9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd\" (UID: \"9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd\") " Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.694825 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd-kube-api-access-2wc57" (OuterVolumeSpecName: "kube-api-access-2wc57") pod "9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd" (UID: "9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd"). InnerVolumeSpecName "kube-api-access-2wc57". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.705815 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.776462 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vwr2\" (UniqueName: \"kubernetes.io/projected/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-kube-api-access-6vwr2\") pod \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.776531 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-config-data\") pod \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.776819 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-combined-ca-bundle\") pod \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\" (UID: \"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9\") " Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.777358 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wc57\" (UniqueName: \"kubernetes.io/projected/9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd-kube-api-access-2wc57\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.799635 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-kube-api-access-6vwr2" (OuterVolumeSpecName: "kube-api-access-6vwr2") pod "9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9" (UID: "9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9"). InnerVolumeSpecName "kube-api-access-6vwr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.842566 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9" (UID: "9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.867918 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-config-data" (OuterVolumeSpecName: "config-data") pod "9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9" (UID: "9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.880685 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vwr2\" (UniqueName: \"kubernetes.io/projected/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-kube-api-access-6vwr2\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.880731 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:42 crc kubenswrapper[4805]: I0216 21:19:42.880741 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.086140 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9","Type":"ContainerDied","Data":"3204afaa1daa57b07cc7eb6b3bacd49d64672afcf655cc4dfc546b7b0616d570"} Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.086533 4805 scope.go:117] "RemoveContainer" containerID="13d9c9217278264ab7874693bdfa449c8e4cd5d1bf29646fef06cef800c79de9" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.086739 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.089933 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd","Type":"ContainerDied","Data":"fc70b4c7bea5abb72e9355a883f4cbc19ce14a68cf0c8a723778dc50f5022ce8"} Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.089981 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.124613 4805 scope.go:117] "RemoveContainer" containerID="33484dc35b67d5539f060def9b4ee2eac83b9d86c0cc5a9d1ea82a3904506c8f" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.139243 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.164397 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.192776 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:19:43 crc kubenswrapper[4805]: E0216 21:19:43.193500 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9" containerName="mysqld-exporter" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.193574 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9" 
containerName="mysqld-exporter" Feb 16 21:19:43 crc kubenswrapper[4805]: E0216 21:19:43.193649 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd" containerName="kube-state-metrics" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.193699 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd" containerName="kube-state-metrics" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.193977 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd" containerName="kube-state-metrics" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.194053 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9" containerName="mysqld-exporter" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.194967 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.197855 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.198170 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.200436 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.214855 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.225191 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.227512 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 
21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.229446 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.242118 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.242973 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.243012 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.248101 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.261112 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.276313 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.289282 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8bz2\" (UniqueName: \"kubernetes.io/projected/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-kube-api-access-g8bz2\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.289428 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc 
kubenswrapper[4805]: I0216 21:19:43.289511 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.289555 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-config-data\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.391438 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2699bf95-c138-4388-9aca-256620ea3458-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.391489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2699bf95-c138-4388-9aca-256620ea3458-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.391643 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 
21:19:43.391931 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.392006 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2699bf95-c138-4388-9aca-256620ea3458-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.392062 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-config-data\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.392130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm9rn\" (UniqueName: \"kubernetes.io/projected/2699bf95-c138-4388-9aca-256620ea3458-kube-api-access-qm9rn\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.392164 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8bz2\" (UniqueName: \"kubernetes.io/projected/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-kube-api-access-g8bz2\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.396078 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-config-data\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.396847 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.397797 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.418898 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8bz2\" (UniqueName: \"kubernetes.io/projected/2a1dcf12-f32a-4822-9458-aa0a10e4afbf-kube-api-access-g8bz2\") pod \"mysqld-exporter-0\" (UID: \"2a1dcf12-f32a-4822-9458-aa0a10e4afbf\") " pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.494601 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2699bf95-c138-4388-9aca-256620ea3458-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.494658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/2699bf95-c138-4388-9aca-256620ea3458-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.494790 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2699bf95-c138-4388-9aca-256620ea3458-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.494838 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm9rn\" (UniqueName: \"kubernetes.io/projected/2699bf95-c138-4388-9aca-256620ea3458-kube-api-access-qm9rn\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.498414 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2699bf95-c138-4388-9aca-256620ea3458-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.498458 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2699bf95-c138-4388-9aca-256620ea3458-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.499114 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2699bf95-c138-4388-9aca-256620ea3458-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.509408 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm9rn\" (UniqueName: \"kubernetes.io/projected/2699bf95-c138-4388-9aca-256620ea3458-kube-api-access-qm9rn\") pod \"kube-state-metrics-0\" (UID: \"2699bf95-c138-4388-9aca-256620ea3458\") " pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.514329 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.551266 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.630009 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9" path="/var/lib/kubelet/pods/9b916d44-b7c1-46d5-ac74-5bcdc3c13fc9/volumes" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.631475 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd" path="/var/lib/kubelet/pods/9ec3132d-f0fa-44bd-9b6d-fa0c92cc99cd/volumes" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.950964 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:43 crc kubenswrapper[4805]: I0216 21:19:43.951277 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:19:44 crc kubenswrapper[4805]: I0216 21:19:44.095366 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:19:44 crc 
kubenswrapper[4805]: I0216 21:19:44.128755 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:19:44 crc kubenswrapper[4805]: I0216 21:19:44.239781 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:19:44 crc kubenswrapper[4805]: I0216 21:19:44.545321 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:19:44 crc kubenswrapper[4805]: I0216 21:19:44.545649 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="ceilometer-central-agent" containerID="cri-o://85ae6985528391bd4cdf6b22d739738fe409c31f69ec4645a7c9c42d1a392311" gracePeriod=30 Feb 16 21:19:44 crc kubenswrapper[4805]: I0216 21:19:44.545673 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="proxy-httpd" containerID="cri-o://eb0b0d694244c5dce5fe6c07a502b3aadcd1c2efd5a829e3831533a682709ce8" gracePeriod=30 Feb 16 21:19:44 crc kubenswrapper[4805]: I0216 21:19:44.545785 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="sg-core" containerID="cri-o://1876e419f77841d8a39822a34ad54814d35104c5a2cc91450e8710fb67cb192a" gracePeriod=30 Feb 16 21:19:44 crc kubenswrapper[4805]: I0216 21:19:44.545807 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="ceilometer-notification-agent" containerID="cri-o://2cce352f3e3c73bb1702753a7aec416982d12e1ea833c9f2696564f33c58ab64" gracePeriod=30 Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.013130 4805 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-qd6k5" podUID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerName="registry-server" probeResult="failure" output=< Feb 16 21:19:45 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:19:45 crc kubenswrapper[4805]: > Feb 16 21:19:45 crc kubenswrapper[4805]: E0216 21:19:45.097056 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0ea1cbd_4507_4c9e_9ae8_968046c89287.slice/crio-conmon-eb0b0d694244c5dce5fe6c07a502b3aadcd1c2efd5a829e3831533a682709ce8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0ea1cbd_4507_4c9e_9ae8_968046c89287.slice/crio-85ae6985528391bd4cdf6b22d739738fe409c31f69ec4645a7c9c42d1a392311.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0ea1cbd_4507_4c9e_9ae8_968046c89287.slice/crio-eb0b0d694244c5dce5fe6c07a502b3aadcd1c2efd5a829e3831533a682709ce8.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.133611 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2699bf95-c138-4388-9aca-256620ea3458","Type":"ContainerStarted","Data":"b631d899a5645b12f99e12247615f03c0c409a4ca5097061a744b37f78e41696"} Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.133978 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2699bf95-c138-4388-9aca-256620ea3458","Type":"ContainerStarted","Data":"21f86df14d842bf6a71497a1b7ecb5c4b91e176d4379ba2f2da7540a66cfdaf6"} Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.134407 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 
21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.148383 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2a1dcf12-f32a-4822-9458-aa0a10e4afbf","Type":"ContainerStarted","Data":"310a92f842f8a13e5882939a87855c61edbc80c507acfdae881d9acab3f91918"} Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.148430 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2a1dcf12-f32a-4822-9458-aa0a10e4afbf","Type":"ContainerStarted","Data":"49a5c3e441bf1f44c8a45c4c0d1337f91c00b682e25027de41759d9a38841ecf"} Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.165407 4805 generic.go:334] "Generic (PLEG): container finished" podID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerID="eb0b0d694244c5dce5fe6c07a502b3aadcd1c2efd5a829e3831533a682709ce8" exitCode=0 Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.165443 4805 generic.go:334] "Generic (PLEG): container finished" podID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerID="1876e419f77841d8a39822a34ad54814d35104c5a2cc91450e8710fb67cb192a" exitCode=2 Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.165457 4805 generic.go:334] "Generic (PLEG): container finished" podID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerID="85ae6985528391bd4cdf6b22d739738fe409c31f69ec4645a7c9c42d1a392311" exitCode=0 Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.165535 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0ea1cbd-4507-4c9e-9ae8-968046c89287","Type":"ContainerDied","Data":"eb0b0d694244c5dce5fe6c07a502b3aadcd1c2efd5a829e3831533a682709ce8"} Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.165583 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0ea1cbd-4507-4c9e-9ae8-968046c89287","Type":"ContainerDied","Data":"1876e419f77841d8a39822a34ad54814d35104c5a2cc91450e8710fb67cb192a"} Feb 16 21:19:45 crc 
kubenswrapper[4805]: I0216 21:19:45.165594 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0ea1cbd-4507-4c9e-9ae8-968046c89287","Type":"ContainerDied","Data":"85ae6985528391bd4cdf6b22d739738fe409c31f69ec4645a7c9c42d1a392311"} Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.169247 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.727237012 podStartE2EDuration="2.169233121s" podCreationTimestamp="2026-02-16 21:19:43 +0000 UTC" firstStartedPulling="2026-02-16 21:19:44.184023276 +0000 UTC m=+1402.002706571" lastFinishedPulling="2026-02-16 21:19:44.626019385 +0000 UTC m=+1402.444702680" observedRunningTime="2026-02-16 21:19:45.158462741 +0000 UTC m=+1402.977146036" watchObservedRunningTime="2026-02-16 21:19:45.169233121 +0000 UTC m=+1402.987916416" Feb 16 21:19:45 crc kubenswrapper[4805]: I0216 21:19:45.179489 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.638920321 podStartE2EDuration="2.179470205s" podCreationTimestamp="2026-02-16 21:19:43 +0000 UTC" firstStartedPulling="2026-02-16 21:19:44.103240997 +0000 UTC m=+1401.921924312" lastFinishedPulling="2026-02-16 21:19:44.643790901 +0000 UTC m=+1402.462474196" observedRunningTime="2026-02-16 21:19:45.175247322 +0000 UTC m=+1402.993930617" watchObservedRunningTime="2026-02-16 21:19:45.179470205 +0000 UTC m=+1402.998153500" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.197093 4805 generic.go:334] "Generic (PLEG): container finished" podID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerID="2cce352f3e3c73bb1702753a7aec416982d12e1ea833c9f2696564f33c58ab64" exitCode=0 Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.198300 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f0ea1cbd-4507-4c9e-9ae8-968046c89287","Type":"ContainerDied","Data":"2cce352f3e3c73bb1702753a7aec416982d12e1ea833c9f2696564f33c58ab64"} Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.199128 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.200018 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.225514 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.226517 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.444603 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.595431 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-scripts\") pod \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.595825 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-log-httpd\") pod \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.595915 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-sg-core-conf-yaml\") pod \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\" (UID: 
\"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.595943 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9r7k\" (UniqueName: \"kubernetes.io/projected/f0ea1cbd-4507-4c9e-9ae8-968046c89287-kube-api-access-b9r7k\") pod \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.596007 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-combined-ca-bundle\") pod \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.596095 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-run-httpd\") pod \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.596139 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-config-data\") pod \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\" (UID: \"f0ea1cbd-4507-4c9e-9ae8-968046c89287\") " Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.596541 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f0ea1cbd-4507-4c9e-9ae8-968046c89287" (UID: "f0ea1cbd-4507-4c9e-9ae8-968046c89287"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.596685 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f0ea1cbd-4507-4c9e-9ae8-968046c89287" (UID: "f0ea1cbd-4507-4c9e-9ae8-968046c89287"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.596965 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.602240 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0ea1cbd-4507-4c9e-9ae8-968046c89287-kube-api-access-b9r7k" (OuterVolumeSpecName: "kube-api-access-b9r7k") pod "f0ea1cbd-4507-4c9e-9ae8-968046c89287" (UID: "f0ea1cbd-4507-4c9e-9ae8-968046c89287"). InnerVolumeSpecName "kube-api-access-b9r7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.603867 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-scripts" (OuterVolumeSpecName: "scripts") pod "f0ea1cbd-4507-4c9e-9ae8-968046c89287" (UID: "f0ea1cbd-4507-4c9e-9ae8-968046c89287"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.628687 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f0ea1cbd-4507-4c9e-9ae8-968046c89287" (UID: "f0ea1cbd-4507-4c9e-9ae8-968046c89287"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.699137 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.699167 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0ea1cbd-4507-4c9e-9ae8-968046c89287-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.699176 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.699187 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9r7k\" (UniqueName: \"kubernetes.io/projected/f0ea1cbd-4507-4c9e-9ae8-968046c89287-kube-api-access-b9r7k\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.702138 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0ea1cbd-4507-4c9e-9ae8-968046c89287" (UID: "f0ea1cbd-4507-4c9e-9ae8-968046c89287"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.754902 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-config-data" (OuterVolumeSpecName: "config-data") pod "f0ea1cbd-4507-4c9e-9ae8-968046c89287" (UID: "f0ea1cbd-4507-4c9e-9ae8-968046c89287"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.801149 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:46 crc kubenswrapper[4805]: I0216 21:19:46.801494 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0ea1cbd-4507-4c9e-9ae8-968046c89287-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.218690 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.223456 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0ea1cbd-4507-4c9e-9ae8-968046c89287","Type":"ContainerDied","Data":"2c9a3dec57aa3df45f393ef7fb5d2ee7bcb8ef3a038190e9ab1538b9d420a5bb"} Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.223520 4805 scope.go:117] "RemoveContainer" containerID="eb0b0d694244c5dce5fe6c07a502b3aadcd1c2efd5a829e3831533a682709ce8" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.223801 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.238358 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.270507 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.276640 4805 scope.go:117] "RemoveContainer" containerID="1876e419f77841d8a39822a34ad54814d35104c5a2cc91450e8710fb67cb192a" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.292374 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.310827 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:19:47 crc kubenswrapper[4805]: E0216 21:19:47.311323 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="sg-core" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.311335 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="sg-core" Feb 16 21:19:47 crc kubenswrapper[4805]: E0216 21:19:47.311359 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="ceilometer-central-agent" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.311366 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="ceilometer-central-agent" Feb 16 21:19:47 crc kubenswrapper[4805]: E0216 21:19:47.311377 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="ceilometer-notification-agent" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.311383 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="ceilometer-notification-agent" Feb 16 21:19:47 crc kubenswrapper[4805]: E0216 21:19:47.311393 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="proxy-httpd" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.311399 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="proxy-httpd" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.311614 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" 
containerName="ceilometer-notification-agent" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.311630 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="sg-core" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.311646 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="ceilometer-central-agent" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.311657 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" containerName="proxy-httpd" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.313666 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.325996 4805 scope.go:117] "RemoveContainer" containerID="2cce352f3e3c73bb1702753a7aec416982d12e1ea833c9f2696564f33c58ab64" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.326282 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.326485 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.327144 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.331185 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.381871 4805 scope.go:117] "RemoveContainer" containerID="85ae6985528391bd4cdf6b22d739738fe409c31f69ec4645a7c9c42d1a392311" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.420344 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-log-httpd\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.420593 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.421853 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-config-data\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.422104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjchb\" (UniqueName: \"kubernetes.io/projected/9864a3bb-9191-4f84-94b2-779089518622-kube-api-access-xjchb\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.422163 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-scripts\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.422198 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.422212 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-run-httpd\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.422408 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.524763 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-scripts\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.524813 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.524833 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-run-httpd\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.524909 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.525007 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-log-httpd\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.525029 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.525046 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-config-data\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.525103 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjchb\" (UniqueName: \"kubernetes.io/projected/9864a3bb-9191-4f84-94b2-779089518622-kube-api-access-xjchb\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.528807 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-run-httpd\") pod \"ceilometer-0\" (UID: 
\"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.529905 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-log-httpd\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.531281 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-scripts\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.531978 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.532974 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.537525 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.548135 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjchb\" (UniqueName: 
\"kubernetes.io/projected/9864a3bb-9191-4f84-94b2-779089518622-kube-api-access-xjchb\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.550831 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-config-data\") pod \"ceilometer-0\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " pod="openstack/ceilometer-0" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.612003 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0ea1cbd-4507-4c9e-9ae8-968046c89287" path="/var/lib/kubelet/pods/f0ea1cbd-4507-4c9e-9ae8-968046c89287/volumes" Feb 16 21:19:47 crc kubenswrapper[4805]: I0216 21:19:47.652486 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:19:48 crc kubenswrapper[4805]: W0216 21:19:48.154953 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9864a3bb_9191_4f84_94b2_779089518622.slice/crio-8136459d2abffbb8facbb907aadae0e125772cb245a2955bac11824ab94e3dea WatchSource:0}: Error finding container 8136459d2abffbb8facbb907aadae0e125772cb245a2955bac11824ab94e3dea: Status 404 returned error can't find the container with id 8136459d2abffbb8facbb907aadae0e125772cb245a2955bac11824ab94e3dea Feb 16 21:19:48 crc kubenswrapper[4805]: I0216 21:19:48.157104 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:19:48 crc kubenswrapper[4805]: I0216 21:19:48.240444 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9864a3bb-9191-4f84-94b2-779089518622","Type":"ContainerStarted","Data":"8136459d2abffbb8facbb907aadae0e125772cb245a2955bac11824ab94e3dea"} Feb 16 21:19:49 crc kubenswrapper[4805]: I0216 21:19:49.275588 
4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9864a3bb-9191-4f84-94b2-779089518622","Type":"ContainerStarted","Data":"36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2"} Feb 16 21:19:50 crc kubenswrapper[4805]: I0216 21:19:50.307453 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9864a3bb-9191-4f84-94b2-779089518622","Type":"ContainerStarted","Data":"cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5"} Feb 16 21:19:51 crc kubenswrapper[4805]: I0216 21:19:51.320326 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9864a3bb-9191-4f84-94b2-779089518622","Type":"ContainerStarted","Data":"f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b"} Feb 16 21:19:53 crc kubenswrapper[4805]: I0216 21:19:53.346036 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9864a3bb-9191-4f84-94b2-779089518622","Type":"ContainerStarted","Data":"56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683"} Feb 16 21:19:53 crc kubenswrapper[4805]: I0216 21:19:53.346659 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:19:53 crc kubenswrapper[4805]: I0216 21:19:53.370043 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.474152603 podStartE2EDuration="6.370028433s" podCreationTimestamp="2026-02-16 21:19:47 +0000 UTC" firstStartedPulling="2026-02-16 21:19:48.157752726 +0000 UTC m=+1405.976436011" lastFinishedPulling="2026-02-16 21:19:52.053628546 +0000 UTC m=+1409.872311841" observedRunningTime="2026-02-16 21:19:53.367633249 +0000 UTC m=+1411.186316564" watchObservedRunningTime="2026-02-16 21:19:53.370028433 +0000 UTC m=+1411.188711728" Feb 16 21:19:53 crc kubenswrapper[4805]: I0216 21:19:53.571332 4805 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 21:19:55 crc kubenswrapper[4805]: I0216 21:19:55.006359 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qd6k5" podUID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerName="registry-server" probeResult="failure" output=< Feb 16 21:19:55 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:19:55 crc kubenswrapper[4805]: > Feb 16 21:20:04 crc kubenswrapper[4805]: I0216 21:20:04.006389 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:20:04 crc kubenswrapper[4805]: I0216 21:20:04.062714 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:20:04 crc kubenswrapper[4805]: I0216 21:20:04.831056 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qd6k5"] Feb 16 21:20:05 crc kubenswrapper[4805]: I0216 21:20:05.519820 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qd6k5" podUID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerName="registry-server" containerID="cri-o://094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974" gracePeriod=2 Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.086648 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.204370 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kts5\" (UniqueName: \"kubernetes.io/projected/2b81905d-b8bf-4f25-a1e4-e08d71909833-kube-api-access-7kts5\") pod \"2b81905d-b8bf-4f25-a1e4-e08d71909833\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.204419 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-catalog-content\") pod \"2b81905d-b8bf-4f25-a1e4-e08d71909833\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.204482 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-utilities\") pod \"2b81905d-b8bf-4f25-a1e4-e08d71909833\" (UID: \"2b81905d-b8bf-4f25-a1e4-e08d71909833\") " Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.206115 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-utilities" (OuterVolumeSpecName: "utilities") pod "2b81905d-b8bf-4f25-a1e4-e08d71909833" (UID: "2b81905d-b8bf-4f25-a1e4-e08d71909833"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.214846 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b81905d-b8bf-4f25-a1e4-e08d71909833-kube-api-access-7kts5" (OuterVolumeSpecName: "kube-api-access-7kts5") pod "2b81905d-b8bf-4f25-a1e4-e08d71909833" (UID: "2b81905d-b8bf-4f25-a1e4-e08d71909833"). InnerVolumeSpecName "kube-api-access-7kts5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.307394 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kts5\" (UniqueName: \"kubernetes.io/projected/2b81905d-b8bf-4f25-a1e4-e08d71909833-kube-api-access-7kts5\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.307605 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.343782 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b81905d-b8bf-4f25-a1e4-e08d71909833" (UID: "2b81905d-b8bf-4f25-a1e4-e08d71909833"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.410037 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b81905d-b8bf-4f25-a1e4-e08d71909833-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.534914 4805 generic.go:334] "Generic (PLEG): container finished" podID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerID="094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974" exitCode=0 Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.534963 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qd6k5" event={"ID":"2b81905d-b8bf-4f25-a1e4-e08d71909833","Type":"ContainerDied","Data":"094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974"} Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.534998 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-qd6k5" event={"ID":"2b81905d-b8bf-4f25-a1e4-e08d71909833","Type":"ContainerDied","Data":"5063d54a94de18b0a25312a3938dbbd05a08373aec4f6c889775993bdf492af6"} Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.535016 4805 scope.go:117] "RemoveContainer" containerID="094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.536250 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qd6k5" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.565046 4805 scope.go:117] "RemoveContainer" containerID="b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.575460 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qd6k5"] Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.585630 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qd6k5"] Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.606021 4805 scope.go:117] "RemoveContainer" containerID="c41cd922c454e7963b2f1cfdc619a436c73cb3f5e27a1de27b67e795b13f7679" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.641382 4805 scope.go:117] "RemoveContainer" containerID="094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974" Feb 16 21:20:06 crc kubenswrapper[4805]: E0216 21:20:06.642050 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974\": container with ID starting with 094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974 not found: ID does not exist" containerID="094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.642084 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974"} err="failed to get container status \"094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974\": rpc error: code = NotFound desc = could not find container \"094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974\": container with ID starting with 094a579f1d1f05183171403857f57ed5ee77ff2e05453b5af9f9558a6843a974 not found: ID does not exist" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.642114 4805 scope.go:117] "RemoveContainer" containerID="b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3" Feb 16 21:20:06 crc kubenswrapper[4805]: E0216 21:20:06.642448 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3\": container with ID starting with b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3 not found: ID does not exist" containerID="b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.642475 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3"} err="failed to get container status \"b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3\": rpc error: code = NotFound desc = could not find container \"b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3\": container with ID starting with b5f775667f8984fa41e7b16072dc6d0dc99c6687100a8cff9d2a317f9abb13e3 not found: ID does not exist" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.642491 4805 scope.go:117] "RemoveContainer" containerID="c41cd922c454e7963b2f1cfdc619a436c73cb3f5e27a1de27b67e795b13f7679" Feb 16 21:20:06 crc kubenswrapper[4805]: E0216 
21:20:06.642977 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c41cd922c454e7963b2f1cfdc619a436c73cb3f5e27a1de27b67e795b13f7679\": container with ID starting with c41cd922c454e7963b2f1cfdc619a436c73cb3f5e27a1de27b67e795b13f7679 not found: ID does not exist" containerID="c41cd922c454e7963b2f1cfdc619a436c73cb3f5e27a1de27b67e795b13f7679" Feb 16 21:20:06 crc kubenswrapper[4805]: I0216 21:20:06.643036 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c41cd922c454e7963b2f1cfdc619a436c73cb3f5e27a1de27b67e795b13f7679"} err="failed to get container status \"c41cd922c454e7963b2f1cfdc619a436c73cb3f5e27a1de27b67e795b13f7679\": rpc error: code = NotFound desc = could not find container \"c41cd922c454e7963b2f1cfdc619a436c73cb3f5e27a1de27b67e795b13f7679\": container with ID starting with c41cd922c454e7963b2f1cfdc619a436c73cb3f5e27a1de27b67e795b13f7679 not found: ID does not exist" Feb 16 21:20:07 crc kubenswrapper[4805]: I0216 21:20:07.610944 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b81905d-b8bf-4f25-a1e4-e08d71909833" path="/var/lib/kubelet/pods/2b81905d-b8bf-4f25-a1e4-e08d71909833/volumes" Feb 16 21:20:08 crc kubenswrapper[4805]: I0216 21:20:08.099475 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:20:08 crc kubenswrapper[4805]: I0216 21:20:08.099570 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 16 21:20:17 crc kubenswrapper[4805]: I0216 21:20:17.671220 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.687530 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-qs466"] Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.699969 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-qs466"] Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.789804 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-m2jhm"] Feb 16 21:20:29 crc kubenswrapper[4805]: E0216 21:20:29.790991 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerName="registry-server" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.791021 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerName="registry-server" Feb 16 21:20:29 crc kubenswrapper[4805]: E0216 21:20:29.791062 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerName="extract-content" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.791075 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerName="extract-content" Feb 16 21:20:29 crc kubenswrapper[4805]: E0216 21:20:29.791109 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerName="extract-utilities" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.791123 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b81905d-b8bf-4f25-a1e4-e08d71909833" containerName="extract-utilities" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.791581 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b81905d-b8bf-4f25-a1e4-e08d71909833" 
containerName="registry-server" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.793067 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-m2jhm" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.801771 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-m2jhm"] Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.838662 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a75265-a8ae-4b0a-9719-085d3361edb7-combined-ca-bundle\") pod \"heat-db-sync-m2jhm\" (UID: \"f1a75265-a8ae-4b0a-9719-085d3361edb7\") " pod="openstack/heat-db-sync-m2jhm" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.838742 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl89q\" (UniqueName: \"kubernetes.io/projected/f1a75265-a8ae-4b0a-9719-085d3361edb7-kube-api-access-cl89q\") pod \"heat-db-sync-m2jhm\" (UID: \"f1a75265-a8ae-4b0a-9719-085d3361edb7\") " pod="openstack/heat-db-sync-m2jhm" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.838837 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a75265-a8ae-4b0a-9719-085d3361edb7-config-data\") pod \"heat-db-sync-m2jhm\" (UID: \"f1a75265-a8ae-4b0a-9719-085d3361edb7\") " pod="openstack/heat-db-sync-m2jhm" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.941347 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a75265-a8ae-4b0a-9719-085d3361edb7-combined-ca-bundle\") pod \"heat-db-sync-m2jhm\" (UID: \"f1a75265-a8ae-4b0a-9719-085d3361edb7\") " pod="openstack/heat-db-sync-m2jhm" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.941389 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl89q\" (UniqueName: \"kubernetes.io/projected/f1a75265-a8ae-4b0a-9719-085d3361edb7-kube-api-access-cl89q\") pod \"heat-db-sync-m2jhm\" (UID: \"f1a75265-a8ae-4b0a-9719-085d3361edb7\") " pod="openstack/heat-db-sync-m2jhm" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.941456 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a75265-a8ae-4b0a-9719-085d3361edb7-config-data\") pod \"heat-db-sync-m2jhm\" (UID: \"f1a75265-a8ae-4b0a-9719-085d3361edb7\") " pod="openstack/heat-db-sync-m2jhm" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.950772 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a75265-a8ae-4b0a-9719-085d3361edb7-config-data\") pod \"heat-db-sync-m2jhm\" (UID: \"f1a75265-a8ae-4b0a-9719-085d3361edb7\") " pod="openstack/heat-db-sync-m2jhm" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.955390 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a75265-a8ae-4b0a-9719-085d3361edb7-combined-ca-bundle\") pod \"heat-db-sync-m2jhm\" (UID: \"f1a75265-a8ae-4b0a-9719-085d3361edb7\") " pod="openstack/heat-db-sync-m2jhm" Feb 16 21:20:29 crc kubenswrapper[4805]: I0216 21:20:29.958669 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl89q\" (UniqueName: \"kubernetes.io/projected/f1a75265-a8ae-4b0a-9719-085d3361edb7-kube-api-access-cl89q\") pod \"heat-db-sync-m2jhm\" (UID: \"f1a75265-a8ae-4b0a-9719-085d3361edb7\") " pod="openstack/heat-db-sync-m2jhm" Feb 16 21:20:30 crc kubenswrapper[4805]: I0216 21:20:30.132641 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-m2jhm" Feb 16 21:20:30 crc kubenswrapper[4805]: I0216 21:20:30.632028 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-m2jhm"] Feb 16 21:20:30 crc kubenswrapper[4805]: I0216 21:20:30.632180 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:20:30 crc kubenswrapper[4805]: E0216 21:20:30.783876 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:20:30 crc kubenswrapper[4805]: E0216 21:20:30.784132 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:20:30 crc kubenswrapper[4805]: E0216 21:20:30.784242 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:20:30 crc kubenswrapper[4805]: E0216 21:20:30.788809 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:20:30 crc kubenswrapper[4805]: I0216 21:20:30.860207 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-m2jhm" event={"ID":"f1a75265-a8ae-4b0a-9719-085d3361edb7","Type":"ContainerStarted","Data":"811645be0b114572da15114fb7c35a9d61666bb68796ec9c659cb079ec7e5908"} Feb 16 21:20:30 crc kubenswrapper[4805]: E0216 21:20:30.861471 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:20:31 crc kubenswrapper[4805]: I0216 21:20:31.571540 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 21:20:31 crc kubenswrapper[4805]: I0216 21:20:31.627858 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b" path="/var/lib/kubelet/pods/fe1ec9fe-bc8c-47c8-a720-2e64cb0da40b/volumes" Feb 16 21:20:31 crc kubenswrapper[4805]: E0216 21:20:31.872072 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:20:32 crc kubenswrapper[4805]: I0216 21:20:32.331164 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:20:32 crc kubenswrapper[4805]: I0216 21:20:32.331501 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9864a3bb-9191-4f84-94b2-779089518622" 
containerName="ceilometer-central-agent" containerID="cri-o://36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2" gracePeriod=30 Feb 16 21:20:32 crc kubenswrapper[4805]: I0216 21:20:32.331588 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="ceilometer-notification-agent" containerID="cri-o://cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5" gracePeriod=30 Feb 16 21:20:32 crc kubenswrapper[4805]: I0216 21:20:32.331596 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="proxy-httpd" containerID="cri-o://56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683" gracePeriod=30 Feb 16 21:20:32 crc kubenswrapper[4805]: I0216 21:20:32.331579 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="sg-core" containerID="cri-o://f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b" gracePeriod=30 Feb 16 21:20:32 crc kubenswrapper[4805]: I0216 21:20:32.585153 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:20:32 crc kubenswrapper[4805]: I0216 21:20:32.883463 4805 generic.go:334] "Generic (PLEG): container finished" podID="9864a3bb-9191-4f84-94b2-779089518622" containerID="56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683" exitCode=0 Feb 16 21:20:32 crc kubenswrapper[4805]: I0216 21:20:32.883817 4805 generic.go:334] "Generic (PLEG): container finished" podID="9864a3bb-9191-4f84-94b2-779089518622" containerID="f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b" exitCode=2 Feb 16 21:20:32 crc kubenswrapper[4805]: I0216 21:20:32.883546 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"9864a3bb-9191-4f84-94b2-779089518622","Type":"ContainerDied","Data":"56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683"} Feb 16 21:20:32 crc kubenswrapper[4805]: I0216 21:20:32.883860 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9864a3bb-9191-4f84-94b2-779089518622","Type":"ContainerDied","Data":"f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b"} Feb 16 21:20:33 crc kubenswrapper[4805]: I0216 21:20:33.896072 4805 generic.go:334] "Generic (PLEG): container finished" podID="9864a3bb-9191-4f84-94b2-779089518622" containerID="36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2" exitCode=0 Feb 16 21:20:33 crc kubenswrapper[4805]: I0216 21:20:33.896116 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9864a3bb-9191-4f84-94b2-779089518622","Type":"ContainerDied","Data":"36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2"} Feb 16 21:20:36 crc kubenswrapper[4805]: I0216 21:20:36.290462 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="95a93760-333e-4689-a64c-c3534a04cec0" containerName="rabbitmq" containerID="cri-o://5e7b36a5647fdf2ee5ecfce9eb3f96cc0f4ba00eab7f3453e540f0d09a432559" gracePeriod=604796 Feb 16 21:20:36 crc kubenswrapper[4805]: I0216 21:20:36.867151 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="7f897110-86a6-4edb-a453-a1322e0a580f" containerName="rabbitmq" containerID="cri-o://4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272" gracePeriod=604796 Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.099835 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.101267 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.365995 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="95a93760-333e-4689-a64c-c3534a04cec0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.500627 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.625714 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7f897110-86a6-4edb-a453-a1322e0a580f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.643343 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-config-data\") pod \"9864a3bb-9191-4f84-94b2-779089518622\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.643523 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjchb\" (UniqueName: \"kubernetes.io/projected/9864a3bb-9191-4f84-94b2-779089518622-kube-api-access-xjchb\") pod \"9864a3bb-9191-4f84-94b2-779089518622\" (UID: 
\"9864a3bb-9191-4f84-94b2-779089518622\") " Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.643556 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-ceilometer-tls-certs\") pod \"9864a3bb-9191-4f84-94b2-779089518622\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.643575 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-combined-ca-bundle\") pod \"9864a3bb-9191-4f84-94b2-779089518622\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.643668 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-scripts\") pod \"9864a3bb-9191-4f84-94b2-779089518622\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.650118 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9864a3bb-9191-4f84-94b2-779089518622-kube-api-access-xjchb" (OuterVolumeSpecName: "kube-api-access-xjchb") pod "9864a3bb-9191-4f84-94b2-779089518622" (UID: "9864a3bb-9191-4f84-94b2-779089518622"). InnerVolumeSpecName "kube-api-access-xjchb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.650368 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-sg-core-conf-yaml\") pod \"9864a3bb-9191-4f84-94b2-779089518622\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.650836 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-run-httpd\") pod \"9864a3bb-9191-4f84-94b2-779089518622\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.651126 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-log-httpd\") pod \"9864a3bb-9191-4f84-94b2-779089518622\" (UID: \"9864a3bb-9191-4f84-94b2-779089518622\") " Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.651315 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9864a3bb-9191-4f84-94b2-779089518622" (UID: "9864a3bb-9191-4f84-94b2-779089518622"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.652277 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9864a3bb-9191-4f84-94b2-779089518622" (UID: "9864a3bb-9191-4f84-94b2-779089518622"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.655202 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.655233 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9864a3bb-9191-4f84-94b2-779089518622-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.655247 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjchb\" (UniqueName: \"kubernetes.io/projected/9864a3bb-9191-4f84-94b2-779089518622-kube-api-access-xjchb\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.663509 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-scripts" (OuterVolumeSpecName: "scripts") pod "9864a3bb-9191-4f84-94b2-779089518622" (UID: "9864a3bb-9191-4f84-94b2-779089518622"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.690189 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9864a3bb-9191-4f84-94b2-779089518622" (UID: "9864a3bb-9191-4f84-94b2-779089518622"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.727590 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9864a3bb-9191-4f84-94b2-779089518622" (UID: "9864a3bb-9191-4f84-94b2-779089518622"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.744903 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9864a3bb-9191-4f84-94b2-779089518622" (UID: "9864a3bb-9191-4f84-94b2-779089518622"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.758377 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.758408 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.758420 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.758431 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-sg-core-conf-yaml\") on 
node \"crc\" DevicePath \"\"" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.783125 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-config-data" (OuterVolumeSpecName: "config-data") pod "9864a3bb-9191-4f84-94b2-779089518622" (UID: "9864a3bb-9191-4f84-94b2-779089518622"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.861002 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9864a3bb-9191-4f84-94b2-779089518622-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.953015 4805 generic.go:334] "Generic (PLEG): container finished" podID="9864a3bb-9191-4f84-94b2-779089518622" containerID="cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5" exitCode=0 Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.953068 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9864a3bb-9191-4f84-94b2-779089518622","Type":"ContainerDied","Data":"cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5"} Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.953105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9864a3bb-9191-4f84-94b2-779089518622","Type":"ContainerDied","Data":"8136459d2abffbb8facbb907aadae0e125772cb245a2955bac11824ab94e3dea"} Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.953116 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:20:38 crc kubenswrapper[4805]: I0216 21:20:38.953126 4805 scope.go:117] "RemoveContainer" containerID="56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.030626 4805 scope.go:117] "RemoveContainer" containerID="f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.050042 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.063453 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.066215 4805 scope.go:117] "RemoveContainer" containerID="cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.073636 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:20:39 crc kubenswrapper[4805]: E0216 21:20:39.074267 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="proxy-httpd" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.074293 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="proxy-httpd" Feb 16 21:20:39 crc kubenswrapper[4805]: E0216 21:20:39.074330 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="ceilometer-central-agent" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.074340 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="ceilometer-central-agent" Feb 16 21:20:39 crc kubenswrapper[4805]: E0216 21:20:39.074355 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="ceilometer-notification-agent" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.074365 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="ceilometer-notification-agent" Feb 16 21:20:39 crc kubenswrapper[4805]: E0216 21:20:39.074395 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="sg-core" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.074405 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="sg-core" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.074676 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="ceilometer-notification-agent" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.074710 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="sg-core" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.074737 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="ceilometer-central-agent" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.074770 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9864a3bb-9191-4f84-94b2-779089518622" containerName="proxy-httpd" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.077456 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.094983 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.095254 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.095425 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.141637 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.154029 4805 scope.go:117] "RemoveContainer" containerID="36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.168177 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.168229 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-scripts\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.168859 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " 
pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.169103 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-run-httpd\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.169143 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-config-data\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.169354 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vpz2\" (UniqueName: \"kubernetes.io/projected/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-kube-api-access-2vpz2\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.169543 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-log-httpd\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.169800 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.183860 4805 scope.go:117] "RemoveContainer" 
containerID="56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683" Feb 16 21:20:39 crc kubenswrapper[4805]: E0216 21:20:39.184381 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683\": container with ID starting with 56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683 not found: ID does not exist" containerID="56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.184445 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683"} err="failed to get container status \"56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683\": rpc error: code = NotFound desc = could not find container \"56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683\": container with ID starting with 56ba9ad5934682a2dda995c4170b8265415992445506f0b108625e7a64c06683 not found: ID does not exist" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.184482 4805 scope.go:117] "RemoveContainer" containerID="f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b" Feb 16 21:20:39 crc kubenswrapper[4805]: E0216 21:20:39.184799 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b\": container with ID starting with f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b not found: ID does not exist" containerID="f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.184949 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b"} err="failed to get container status \"f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b\": rpc error: code = NotFound desc = could not find container \"f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b\": container with ID starting with f7e616abbaa974364c3a113e28915554fc0aa552f5ba98b85180b0664dc0c83b not found: ID does not exist" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.185097 4805 scope.go:117] "RemoveContainer" containerID="cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5" Feb 16 21:20:39 crc kubenswrapper[4805]: E0216 21:20:39.185586 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5\": container with ID starting with cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5 not found: ID does not exist" containerID="cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.185663 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5"} err="failed to get container status \"cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5\": rpc error: code = NotFound desc = could not find container \"cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5\": container with ID starting with cb01e520b125aa747a741ec4f93a0589028719faa1560a668a96551944940ce5 not found: ID does not exist" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.185780 4805 scope.go:117] "RemoveContainer" containerID="36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2" Feb 16 21:20:39 crc kubenswrapper[4805]: E0216 21:20:39.186092 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2\": container with ID starting with 36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2 not found: ID does not exist" containerID="36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.186175 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2"} err="failed to get container status \"36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2\": rpc error: code = NotFound desc = could not find container \"36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2\": container with ID starting with 36487694ca8aba0b7afe081c407334f969a9cab0d4f55ca98919341e122705e2 not found: ID does not exist" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.279422 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.279479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-scripts\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.279768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " 
pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.279826 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-config-data\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.279842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-run-httpd\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.279900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vpz2\" (UniqueName: \"kubernetes.io/projected/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-kube-api-access-2vpz2\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.279963 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-log-httpd\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.280029 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.280930 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-run-httpd\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.281003 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-log-httpd\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.285376 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.285899 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.288096 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-config-data\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.289830 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.295999 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-scripts\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.298034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vpz2\" (UniqueName: \"kubernetes.io/projected/f2bbe998-2ee6-4b84-b723-42b1c4381ebc-kube-api-access-2vpz2\") pod \"ceilometer-0\" (UID: \"f2bbe998-2ee6-4b84-b723-42b1c4381ebc\") " pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.409046 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.621662 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9864a3bb-9191-4f84-94b2-779089518622" path="/var/lib/kubelet/pods/9864a3bb-9191-4f84-94b2-779089518622/volumes" Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.923695 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:20:39 crc kubenswrapper[4805]: I0216 21:20:39.969082 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bbe998-2ee6-4b84-b723-42b1c4381ebc","Type":"ContainerStarted","Data":"0adb9360e9fde3a922e0d70c12bfaa67ba927ce20c69d91b97c4f47895a2df84"} Feb 16 21:20:40 crc kubenswrapper[4805]: E0216 21:20:40.051570 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:20:40 crc kubenswrapper[4805]: E0216 21:20:40.051674 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:20:40 crc kubenswrapper[4805]: E0216 21:20:40.051953 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca
-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 21:20:41 crc kubenswrapper[4805]: I0216 21:20:41.995250 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bbe998-2ee6-4b84-b723-42b1c4381ebc","Type":"ContainerStarted","Data":"f87ffd03d36ba95b785d4fbed218da3cc7082810493661188c729ed6a5c69f6f"} Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.018937 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bbe998-2ee6-4b84-b723-42b1c4381ebc","Type":"ContainerStarted","Data":"f10eee83f36b3c4d57775f3b3046fb04a929195c02802f88284b653d72988e89"} Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.026239 4805 generic.go:334] "Generic (PLEG): container finished" podID="95a93760-333e-4689-a64c-c3534a04cec0" containerID="5e7b36a5647fdf2ee5ecfce9eb3f96cc0f4ba00eab7f3453e540f0d09a432559" exitCode=0 Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.026293 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"95a93760-333e-4689-a64c-c3534a04cec0","Type":"ContainerDied","Data":"5e7b36a5647fdf2ee5ecfce9eb3f96cc0f4ba00eab7f3453e540f0d09a432559"} Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.165821 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.292998 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-plugins\") pod \"95a93760-333e-4689-a64c-c3534a04cec0\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.293327 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-server-conf\") pod \"95a93760-333e-4689-a64c-c3534a04cec0\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.293444 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-erlang-cookie\") pod \"95a93760-333e-4689-a64c-c3534a04cec0\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.293505 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "95a93760-333e-4689-a64c-c3534a04cec0" (UID: "95a93760-333e-4689-a64c-c3534a04cec0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.294397 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "95a93760-333e-4689-a64c-c3534a04cec0" (UID: "95a93760-333e-4689-a64c-c3534a04cec0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.294548 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483\") pod \"95a93760-333e-4689-a64c-c3534a04cec0\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.294660 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-plugins-conf\") pod \"95a93760-333e-4689-a64c-c3534a04cec0\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.294810 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-confd\") pod \"95a93760-333e-4689-a64c-c3534a04cec0\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.294927 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjkdn\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-kube-api-access-sjkdn\") pod \"95a93760-333e-4689-a64c-c3534a04cec0\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.295046 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95a93760-333e-4689-a64c-c3534a04cec0-erlang-cookie-secret\") pod \"95a93760-333e-4689-a64c-c3534a04cec0\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.295080 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "95a93760-333e-4689-a64c-c3534a04cec0" (UID: "95a93760-333e-4689-a64c-c3534a04cec0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.295637 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-config-data\") pod \"95a93760-333e-4689-a64c-c3534a04cec0\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.296146 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95a93760-333e-4689-a64c-c3534a04cec0-pod-info\") pod \"95a93760-333e-4689-a64c-c3534a04cec0\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.296263 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-tls\") pod \"95a93760-333e-4689-a64c-c3534a04cec0\" (UID: \"95a93760-333e-4689-a64c-c3534a04cec0\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.297482 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.297585 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.299267 4805 reconciler_common.go:293] "Volume 
detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.302881 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/95a93760-333e-4689-a64c-c3534a04cec0-pod-info" (OuterVolumeSpecName: "pod-info") pod "95a93760-333e-4689-a64c-c3534a04cec0" (UID: "95a93760-333e-4689-a64c-c3534a04cec0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.303213 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a93760-333e-4689-a64c-c3534a04cec0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "95a93760-333e-4689-a64c-c3534a04cec0" (UID: "95a93760-333e-4689-a64c-c3534a04cec0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.303901 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-kube-api-access-sjkdn" (OuterVolumeSpecName: "kube-api-access-sjkdn") pod "95a93760-333e-4689-a64c-c3534a04cec0" (UID: "95a93760-333e-4689-a64c-c3534a04cec0"). InnerVolumeSpecName "kube-api-access-sjkdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.306097 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "95a93760-333e-4689-a64c-c3534a04cec0" (UID: "95a93760-333e-4689-a64c-c3534a04cec0"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.372876 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483" (OuterVolumeSpecName: "persistence") pod "95a93760-333e-4689-a64c-c3534a04cec0" (UID: "95a93760-333e-4689-a64c-c3534a04cec0"). InnerVolumeSpecName "pvc-146d831a-5abd-464e-ad27-980da7be7483". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.387163 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-config-data" (OuterVolumeSpecName: "config-data") pod "95a93760-333e-4689-a64c-c3534a04cec0" (UID: "95a93760-333e-4689-a64c-c3534a04cec0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.401157 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-146d831a-5abd-464e-ad27-980da7be7483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483\") on node \"crc\" " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.401185 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjkdn\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-kube-api-access-sjkdn\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.401196 4805 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95a93760-333e-4689-a64c-c3534a04cec0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.401204 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.401213 4805 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95a93760-333e-4689-a64c-c3534a04cec0-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.401220 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.508130 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.508472 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-146d831a-5abd-464e-ad27-980da7be7483" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483") on node "crc" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.509697 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-146d831a-5abd-464e-ad27-980da7be7483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.525175 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.528433 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-server-conf" (OuterVolumeSpecName: "server-conf") pod "95a93760-333e-4689-a64c-c3534a04cec0" (UID: "95a93760-333e-4689-a64c-c3534a04cec0"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.611790 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "95a93760-333e-4689-a64c-c3534a04cec0" (UID: "95a93760-333e-4689-a64c-c3534a04cec0"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.614227 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.614290 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-confd\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.614348 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f897110-86a6-4edb-a453-a1322e0a580f-erlang-cookie-secret\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.614403 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-config-data\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.614437 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-plugins-conf\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.614530 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28cdd\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-kube-api-access-28cdd\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.614551 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f897110-86a6-4edb-a453-a1322e0a580f-pod-info\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.614587 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-server-conf\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.614646 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-erlang-cookie\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.614704 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-plugins\") pod 
\"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.614744 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-tls\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.615886 4805 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95a93760-333e-4689-a64c-c3534a04cec0-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.615903 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95a93760-333e-4689-a64c-c3534a04cec0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.616818 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7f897110-86a6-4edb-a453-a1322e0a580f" (UID: "7f897110-86a6-4edb-a453-a1322e0a580f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.619684 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7f897110-86a6-4edb-a453-a1322e0a580f" (UID: "7f897110-86a6-4edb-a453-a1322e0a580f"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.624935 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-kube-api-access-28cdd" (OuterVolumeSpecName: "kube-api-access-28cdd") pod "7f897110-86a6-4edb-a453-a1322e0a580f" (UID: "7f897110-86a6-4edb-a453-a1322e0a580f"). InnerVolumeSpecName "kube-api-access-28cdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.625145 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7f897110-86a6-4edb-a453-a1322e0a580f" (UID: "7f897110-86a6-4edb-a453-a1322e0a580f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.627274 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7f897110-86a6-4edb-a453-a1322e0a580f" (UID: "7f897110-86a6-4edb-a453-a1322e0a580f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.654596 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7f897110-86a6-4edb-a453-a1322e0a580f-pod-info" (OuterVolumeSpecName: "pod-info") pod "7f897110-86a6-4edb-a453-a1322e0a580f" (UID: "7f897110-86a6-4edb-a453-a1322e0a580f"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.656853 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f897110-86a6-4edb-a453-a1322e0a580f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7f897110-86a6-4edb-a453-a1322e0a580f" (UID: "7f897110-86a6-4edb-a453-a1322e0a580f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.720500 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817" (OuterVolumeSpecName: "persistence") pod "7f897110-86a6-4edb-a453-a1322e0a580f" (UID: "7f897110-86a6-4edb-a453-a1322e0a580f"). InnerVolumeSpecName "pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: E0216 21:20:43.720955 4805 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") : UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/7f897110-86a6-4edb-a453-a1322e0a580f/volumes/kubernetes.io~csi/pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817/mount]: kubernetes.io/csi: failed to open volume data file 
[/var/lib/kubelet/pods/7f897110-86a6-4edb-a453-a1322e0a580f/volumes/kubernetes.io~csi/pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817/vol_data.json]: open /var/lib/kubelet/pods/7f897110-86a6-4edb-a453-a1322e0a580f/volumes/kubernetes.io~csi/pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"7f897110-86a6-4edb-a453-a1322e0a580f\" (UID: \"7f897110-86a6-4edb-a453-a1322e0a580f\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/7f897110-86a6-4edb-a453-a1322e0a580f/volumes/kubernetes.io~csi/pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/7f897110-86a6-4edb-a453-a1322e0a580f/volumes/kubernetes.io~csi/pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817/vol_data.json]: open /var/lib/kubelet/pods/7f897110-86a6-4edb-a453-a1322e0a580f/volumes/kubernetes.io~csi/pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817/vol_data.json: no such file or directory" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.721871 4805 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f897110-86a6-4edb-a453-a1322e0a580f-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.721891 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.721901 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc 
kubenswrapper[4805]: I0216 21:20:43.721910 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.721930 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") on node \"crc\" " Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.721940 4805 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f897110-86a6-4edb-a453-a1322e0a580f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.721949 4805 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.721958 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28cdd\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-kube-api-access-28cdd\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.755307 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-config-data" (OuterVolumeSpecName: "config-data") pod "7f897110-86a6-4edb-a453-a1322e0a580f" (UID: "7f897110-86a6-4edb-a453-a1322e0a580f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.784567 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping UnmountDevice... Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.784956 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817") on node "crc" Feb 16 21:20:43 crc kubenswrapper[4805]: E0216 21:20:43.785373 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.787225 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-server-conf" (OuterVolumeSpecName: "server-conf") pod "7f897110-86a6-4edb-a453-a1322e0a580f" (UID: "7f897110-86a6-4edb-a453-a1322e0a580f"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.823589 4805 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.823627 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.823639 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f897110-86a6-4edb-a453-a1322e0a580f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.861339 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7f897110-86a6-4edb-a453-a1322e0a580f" (UID: "7f897110-86a6-4edb-a453-a1322e0a580f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:20:43 crc kubenswrapper[4805]: I0216 21:20:43.925270 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f897110-86a6-4edb-a453-a1322e0a580f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.036844 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"95a93760-333e-4689-a64c-c3534a04cec0","Type":"ContainerDied","Data":"60a07e3d570d94a4a2c8167f2c76d38beb0cd98ac1679556fc0348cef0903c86"} Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.036890 4805 scope.go:117] "RemoveContainer" containerID="5e7b36a5647fdf2ee5ecfce9eb3f96cc0f4ba00eab7f3453e540f0d09a432559" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.037009 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.042226 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bbe998-2ee6-4b84-b723-42b1c4381ebc","Type":"ContainerStarted","Data":"3537b58e3a0ca53990e6e3db7d052de8740a26121e701feefc5b33da7d1126ca"} Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.043008 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:20:44 crc kubenswrapper[4805]: E0216 21:20:44.043907 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.045810 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="7f897110-86a6-4edb-a453-a1322e0a580f" containerID="4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272" exitCode=0 Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.045836 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f897110-86a6-4edb-a453-a1322e0a580f","Type":"ContainerDied","Data":"4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272"} Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.045851 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f897110-86a6-4edb-a453-a1322e0a580f","Type":"ContainerDied","Data":"679eb0636c8fc2c8a2b4614dc2dcf42c831b8b1f85ea0f796c1682dcb087edbf"} Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.045888 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.065400 4805 scope.go:117] "RemoveContainer" containerID="7aeeed8f72d2e51caa4f2b0119cd92aa83ce279f4caef23c61ee0897a9f4e84f" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.073135 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.086158 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.089030 4805 scope.go:117] "RemoveContainer" containerID="4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.121587 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 21:20:44 crc kubenswrapper[4805]: E0216 21:20:44.122476 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a93760-333e-4689-a64c-c3534a04cec0" containerName="rabbitmq" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 
21:20:44.122561 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a93760-333e-4689-a64c-c3534a04cec0" containerName="rabbitmq" Feb 16 21:20:44 crc kubenswrapper[4805]: E0216 21:20:44.122637 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f897110-86a6-4edb-a453-a1322e0a580f" containerName="setup-container" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.122712 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f897110-86a6-4edb-a453-a1322e0a580f" containerName="setup-container" Feb 16 21:20:44 crc kubenswrapper[4805]: E0216 21:20:44.122921 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f897110-86a6-4edb-a453-a1322e0a580f" containerName="rabbitmq" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.122977 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f897110-86a6-4edb-a453-a1322e0a580f" containerName="rabbitmq" Feb 16 21:20:44 crc kubenswrapper[4805]: E0216 21:20:44.123044 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a93760-333e-4689-a64c-c3534a04cec0" containerName="setup-container" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.123100 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a93760-333e-4689-a64c-c3534a04cec0" containerName="setup-container" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.123451 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a93760-333e-4689-a64c-c3534a04cec0" containerName="rabbitmq" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.123556 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f897110-86a6-4edb-a453-a1322e0a580f" containerName="rabbitmq" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.127290 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.187083 4805 scope.go:117] "RemoveContainer" containerID="450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.190355 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.215276 4805 scope.go:117] "RemoveContainer" containerID="4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272" Feb 16 21:20:44 crc kubenswrapper[4805]: E0216 21:20:44.215802 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272\": container with ID starting with 4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272 not found: ID does not exist" containerID="4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.215851 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272"} err="failed to get container status \"4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272\": rpc error: code = NotFound desc = could not find container \"4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272\": container with ID starting with 4c2ac06e11337e5f5d8a3d6bcce0e6bbcb4a4fe5d41aace506aee1bafe301272 not found: ID does not exist" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.215878 4805 scope.go:117] "RemoveContainer" containerID="450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9" Feb 16 21:20:44 crc kubenswrapper[4805]: E0216 21:20:44.216162 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9\": container with ID starting with 450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9 not found: ID does not exist" containerID="450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.216192 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9"} err="failed to get container status \"450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9\": rpc error: code = NotFound desc = could not find container \"450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9\": container with ID starting with 450ebb5af220b12b1e8676a0aac9e26feb4e4593dbae4a967bd3faaeb97203e9 not found: ID does not exist" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.224450 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.233484 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.233561 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57bae43a-529b-4748-8a58-63b1a1c6db10-server-conf\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.233634 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/57bae43a-529b-4748-8a58-63b1a1c6db10-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.233694 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57bae43a-529b-4748-8a58-63b1a1c6db10-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.233884 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fwdk\" (UniqueName: \"kubernetes.io/projected/57bae43a-529b-4748-8a58-63b1a1c6db10-kube-api-access-6fwdk\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.233944 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.233969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-146d831a-5abd-464e-ad27-980da7be7483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.234026 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/57bae43a-529b-4748-8a58-63b1a1c6db10-pod-info\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.234099 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57bae43a-529b-4748-8a58-63b1a1c6db10-config-data\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.234153 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.234170 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.241935 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.275509 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.278878 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.280817 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.286296 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.286353 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-84zrc" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.287113 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.287439 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.287116 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.287670 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.291964 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.336345 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.336662 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee307678-615e-4eaf-be4c-6e44e3a31f27-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.336837 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.336977 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.337109 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57bae43a-529b-4748-8a58-63b1a1c6db10-server-conf\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.337255 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57bae43a-529b-4748-8a58-63b1a1c6db10-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.337357 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee307678-615e-4eaf-be4c-6e44e3a31f27-plugins-conf\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.337453 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.337564 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57bae43a-529b-4748-8a58-63b1a1c6db10-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.337672 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.337818 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fwdk\" (UniqueName: \"kubernetes.io/projected/57bae43a-529b-4748-8a58-63b1a1c6db10-kube-api-access-6fwdk\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.336894 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.337927 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee307678-615e-4eaf-be4c-6e44e3a31f27-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.338108 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.338205 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-146d831a-5abd-464e-ad27-980da7be7483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.338341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57bae43a-529b-4748-8a58-63b1a1c6db10-pod-info\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.338437 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.338555 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57bae43a-529b-4748-8a58-63b1a1c6db10-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.338685 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.338871 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee307678-615e-4eaf-be4c-6e44e3a31f27-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.339000 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57bae43a-529b-4748-8a58-63b1a1c6db10-config-data\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.339583 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqpzd\" (UniqueName: \"kubernetes.io/projected/ee307678-615e-4eaf-be4c-6e44e3a31f27-kube-api-access-vqpzd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc 
kubenswrapper[4805]: I0216 21:20:44.339676 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee307678-615e-4eaf-be4c-6e44e3a31f27-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.339824 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.340131 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57bae43a-529b-4748-8a58-63b1a1c6db10-config-data\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.340236 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.340950 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57bae43a-529b-4748-8a58-63b1a1c6db10-server-conf\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.341981 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.342229 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57bae43a-529b-4748-8a58-63b1a1c6db10-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.342924 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.343052 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-146d831a-5abd-464e-ad27-980da7be7483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/74bc4188232fbdfbfb5c6ded44af5120efb3ac2bd3e07d5261392b9eea692a72/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.343274 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57bae43a-529b-4748-8a58-63b1a1c6db10-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.344101 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57bae43a-529b-4748-8a58-63b1a1c6db10-pod-info\") pod \"rabbitmq-server-2\" (UID: 
\"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.356089 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fwdk\" (UniqueName: \"kubernetes.io/projected/57bae43a-529b-4748-8a58-63b1a1c6db10-kube-api-access-6fwdk\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.400881 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-146d831a-5abd-464e-ad27-980da7be7483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-146d831a-5abd-464e-ad27-980da7be7483\") pod \"rabbitmq-server-2\" (UID: \"57bae43a-529b-4748-8a58-63b1a1c6db10\") " pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.442093 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqpzd\" (UniqueName: \"kubernetes.io/projected/ee307678-615e-4eaf-be4c-6e44e3a31f27-kube-api-access-vqpzd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.442135 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee307678-615e-4eaf-be4c-6e44e3a31f27-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.442184 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee307678-615e-4eaf-be4c-6e44e3a31f27-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.442227 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.442290 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee307678-615e-4eaf-be4c-6e44e3a31f27-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.442311 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.442348 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.442381 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee307678-615e-4eaf-be4c-6e44e3a31f27-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 
crc kubenswrapper[4805]: I0216 21:20:44.442427 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.442478 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.442502 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee307678-615e-4eaf-be4c-6e44e3a31f27-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.443305 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee307678-615e-4eaf-be4c-6e44e3a31f27-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.443824 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee307678-615e-4eaf-be4c-6e44e3a31f27-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.443975 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.444054 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.444759 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee307678-615e-4eaf-be4c-6e44e3a31f27-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.445746 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.445788 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9531023197cf390bfc7490105fc42faf100fe54366bbc898152126d1b095ba49/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.446510 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee307678-615e-4eaf-be4c-6e44e3a31f27-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.446636 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee307678-615e-4eaf-be4c-6e44e3a31f27-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.447134 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.448132 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee307678-615e-4eaf-be4c-6e44e3a31f27-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.458360 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqpzd\" (UniqueName: \"kubernetes.io/projected/ee307678-615e-4eaf-be4c-6e44e3a31f27-kube-api-access-vqpzd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.459004 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.495403 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2063ebd1-48c7-4e38-a982-8a6bb5c9c817\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee307678-615e-4eaf-be4c-6e44e3a31f27\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: I0216 21:20:44.622361 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:20:44 crc kubenswrapper[4805]: E0216 21:20:44.718254 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:20:44 crc kubenswrapper[4805]: E0216 21:20:44.718626 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:20:44 crc kubenswrapper[4805]: E0216 21:20:44.718764 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89
q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:20:44 crc kubenswrapper[4805]: E0216 21:20:44.722457 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:20:45 crc kubenswrapper[4805]: I0216 21:20:45.004560 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 21:20:45 crc kubenswrapper[4805]: W0216 21:20:45.005533 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57bae43a_529b_4748_8a58_63b1a1c6db10.slice/crio-a1e95547f855d4f3f3337d578a5a37c3d1d3d19b000bbc94dac0672618f57a88 WatchSource:0}: Error finding container a1e95547f855d4f3f3337d578a5a37c3d1d3d19b000bbc94dac0672618f57a88: Status 404 returned error can't find the container with id a1e95547f855d4f3f3337d578a5a37c3d1d3d19b000bbc94dac0672618f57a88 Feb 16 21:20:45 crc kubenswrapper[4805]: I0216 21:20:45.112312 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"57bae43a-529b-4748-8a58-63b1a1c6db10","Type":"ContainerStarted","Data":"a1e95547f855d4f3f3337d578a5a37c3d1d3d19b000bbc94dac0672618f57a88"} Feb 16 21:20:45 crc kubenswrapper[4805]: E0216 21:20:45.116433 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:20:45 crc kubenswrapper[4805]: I0216 21:20:45.129288 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:20:45 crc kubenswrapper[4805]: I0216 21:20:45.613768 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f897110-86a6-4edb-a453-a1322e0a580f" path="/var/lib/kubelet/pods/7f897110-86a6-4edb-a453-a1322e0a580f/volumes" Feb 16 21:20:45 crc kubenswrapper[4805]: I0216 21:20:45.615173 4805 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a93760-333e-4689-a64c-c3534a04cec0" path="/var/lib/kubelet/pods/95a93760-333e-4689-a64c-c3534a04cec0/volumes" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.129431 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee307678-615e-4eaf-be4c-6e44e3a31f27","Type":"ContainerStarted","Data":"46c9b7064021a00ab6a59ff3e727d2da471d0c52c4466f0174518f9cf9972c8e"} Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.309247 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dwppj"] Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.319497 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.324115 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.340551 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dwppj"] Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.407072 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.407115 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdmqf\" (UniqueName: \"kubernetes.io/projected/22a3b250-0093-4f8a-a827-9bf0242af330-kube-api-access-wdmqf\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " 
pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.407151 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.407168 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-config\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.407241 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.407291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.407340 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: 
\"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.510277 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.510363 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-config\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.510528 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.510651 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.510788 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " 
pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.511018 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.511074 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdmqf\" (UniqueName: \"kubernetes.io/projected/22a3b250-0093-4f8a-a827-9bf0242af330-kube-api-access-wdmqf\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.517471 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-config\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.517816 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.518089 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc 
kubenswrapper[4805]: I0216 21:20:46.518980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.519978 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.520107 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.544618 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdmqf\" (UniqueName: \"kubernetes.io/projected/22a3b250-0093-4f8a-a827-9bf0242af330-kube-api-access-wdmqf\") pod \"dnsmasq-dns-5b75489c6f-dwppj\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:46 crc kubenswrapper[4805]: I0216 21:20:46.659875 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:47 crc kubenswrapper[4805]: I0216 21:20:47.141050 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee307678-615e-4eaf-be4c-6e44e3a31f27","Type":"ContainerStarted","Data":"54f7794c9890e75116dbe5c928f9ce135d8b349c674d5d4e2da3804ef45f443e"} Feb 16 21:20:47 crc kubenswrapper[4805]: I0216 21:20:47.232921 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dwppj"] Feb 16 21:20:48 crc kubenswrapper[4805]: I0216 21:20:48.162232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"57bae43a-529b-4748-8a58-63b1a1c6db10","Type":"ContainerStarted","Data":"1ce1fe8a9cc79929f1c7997a88c4f6d1ee20e5fd59b5d881b1cda83ab0b0c3b4"} Feb 16 21:20:48 crc kubenswrapper[4805]: I0216 21:20:48.168527 4805 generic.go:334] "Generic (PLEG): container finished" podID="22a3b250-0093-4f8a-a827-9bf0242af330" containerID="79d465ee717f1c4dd439d76661edae50ba5d4bfa72517878ebf83d83819a56bd" exitCode=0 Feb 16 21:20:48 crc kubenswrapper[4805]: I0216 21:20:48.169701 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" event={"ID":"22a3b250-0093-4f8a-a827-9bf0242af330","Type":"ContainerDied","Data":"79d465ee717f1c4dd439d76661edae50ba5d4bfa72517878ebf83d83819a56bd"} Feb 16 21:20:48 crc kubenswrapper[4805]: I0216 21:20:48.169810 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" event={"ID":"22a3b250-0093-4f8a-a827-9bf0242af330","Type":"ContainerStarted","Data":"c116abb31241b59f2befef178973d8c2a9e7728358c038f83d1321089f260766"} Feb 16 21:20:49 crc kubenswrapper[4805]: I0216 21:20:49.184264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" 
event={"ID":"22a3b250-0093-4f8a-a827-9bf0242af330","Type":"ContainerStarted","Data":"97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063"} Feb 16 21:20:49 crc kubenswrapper[4805]: I0216 21:20:49.207235 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" podStartSLOduration=3.207218692 podStartE2EDuration="3.207218692s" podCreationTimestamp="2026-02-16 21:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:20:49.206076561 +0000 UTC m=+1467.024759846" watchObservedRunningTime="2026-02-16 21:20:49.207218692 +0000 UTC m=+1467.025901987" Feb 16 21:20:50 crc kubenswrapper[4805]: I0216 21:20:50.197426 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:56 crc kubenswrapper[4805]: I0216 21:20:56.662647 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:20:56 crc kubenswrapper[4805]: I0216 21:20:56.745325 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-lpsc5"] Feb 16 21:20:56 crc kubenswrapper[4805]: I0216 21:20:56.745553 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" podUID="a0874a96-7e2d-4cf2-847f-50d9b97704eb" containerName="dnsmasq-dns" containerID="cri-o://5570b75c2abbcaab944e1429f72c20e8a04af4c2cb31f4121fd8486bd08bd1c6" gracePeriod=10 Feb 16 21:20:56 crc kubenswrapper[4805]: I0216 21:20:56.930790 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-fncpn"] Feb 16 21:20:56 crc kubenswrapper[4805]: I0216 21:20:56.932773 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:56 crc kubenswrapper[4805]: E0216 21:20:56.937673 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0874a96_7e2d_4cf2_847f_50d9b97704eb.slice/crio-conmon-5570b75c2abbcaab944e1429f72c20e8a04af4c2cb31f4121fd8486bd08bd1c6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0874a96_7e2d_4cf2_847f_50d9b97704eb.slice/crio-5570b75c2abbcaab944e1429f72c20e8a04af4c2cb31f4121fd8486bd08bd1c6.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:20:56 crc kubenswrapper[4805]: I0216 21:20:56.952176 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-fncpn"] Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.010460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.010556 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.010582 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-ovsdbserver-nb\") 
pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.010653 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.010672 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.010997 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-config\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.011104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgq9j\" (UniqueName: \"kubernetes.io/projected/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-kube-api-access-mgq9j\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.112628 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.112888 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.112925 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.112944 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.113004 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-config\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.113028 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgq9j\" (UniqueName: 
\"kubernetes.io/projected/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-kube-api-access-mgq9j\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.113133 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.113417 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.113567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.113686 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.113865 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.114134 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.114150 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-config\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.137828 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgq9j\" (UniqueName: \"kubernetes.io/projected/a710016e-8c14-45a3-b4c5-2b11b3fecd2a-kube-api-access-mgq9j\") pod \"dnsmasq-dns-5d75f767dc-fncpn\" (UID: \"a710016e-8c14-45a3-b4c5-2b11b3fecd2a\") " pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.287288 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.300540 4805 generic.go:334] "Generic (PLEG): container finished" podID="a0874a96-7e2d-4cf2-847f-50d9b97704eb" containerID="5570b75c2abbcaab944e1429f72c20e8a04af4c2cb31f4121fd8486bd08bd1c6" exitCode=0 Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.300848 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" event={"ID":"a0874a96-7e2d-4cf2-847f-50d9b97704eb","Type":"ContainerDied","Data":"5570b75c2abbcaab944e1429f72c20e8a04af4c2cb31f4121fd8486bd08bd1c6"} Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.449058 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.525482 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-swift-storage-0\") pod \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.525634 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-nb\") pod \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.525709 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-sb\") pod \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.525766 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-config\") pod \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.525844 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-svc\") pod \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.525916 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7wdv\" (UniqueName: \"kubernetes.io/projected/a0874a96-7e2d-4cf2-847f-50d9b97704eb-kube-api-access-d7wdv\") pod \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\" (UID: \"a0874a96-7e2d-4cf2-847f-50d9b97704eb\") " Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.532973 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0874a96-7e2d-4cf2-847f-50d9b97704eb-kube-api-access-d7wdv" (OuterVolumeSpecName: "kube-api-access-d7wdv") pod "a0874a96-7e2d-4cf2-847f-50d9b97704eb" (UID: "a0874a96-7e2d-4cf2-847f-50d9b97704eb"). InnerVolumeSpecName "kube-api-access-d7wdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.590989 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a0874a96-7e2d-4cf2-847f-50d9b97704eb" (UID: "a0874a96-7e2d-4cf2-847f-50d9b97704eb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.611113 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a0874a96-7e2d-4cf2-847f-50d9b97704eb" (UID: "a0874a96-7e2d-4cf2-847f-50d9b97704eb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.625239 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a0874a96-7e2d-4cf2-847f-50d9b97704eb" (UID: "a0874a96-7e2d-4cf2-847f-50d9b97704eb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.629301 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a0874a96-7e2d-4cf2-847f-50d9b97704eb" (UID: "a0874a96-7e2d-4cf2-847f-50d9b97704eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.631090 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-config" (OuterVolumeSpecName: "config") pod "a0874a96-7e2d-4cf2-847f-50d9b97704eb" (UID: "a0874a96-7e2d-4cf2-847f-50d9b97704eb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.636848 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.636879 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.636888 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.636898 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.636908 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7wdv\" (UniqueName: \"kubernetes.io/projected/a0874a96-7e2d-4cf2-847f-50d9b97704eb-kube-api-access-d7wdv\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.636920 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0874a96-7e2d-4cf2-847f-50d9b97704eb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:20:57 crc kubenswrapper[4805]: W0216 21:20:57.815173 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda710016e_8c14_45a3_b4c5_2b11b3fecd2a.slice/crio-e5b9476c553e1cde3bbf201599baa22f43ba450f924a62c2a0625ae6946ffc06 WatchSource:0}: Error finding 
container e5b9476c553e1cde3bbf201599baa22f43ba450f924a62c2a0625ae6946ffc06: Status 404 returned error can't find the container with id e5b9476c553e1cde3bbf201599baa22f43ba450f924a62c2a0625ae6946ffc06 Feb 16 21:20:57 crc kubenswrapper[4805]: I0216 21:20:57.819078 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-fncpn"] Feb 16 21:20:58 crc kubenswrapper[4805]: I0216 21:20:58.311956 4805 generic.go:334] "Generic (PLEG): container finished" podID="a710016e-8c14-45a3-b4c5-2b11b3fecd2a" containerID="92b7bb2021ca4758d3d06a64d80e06e14379828ba65136f06f36c451b246ea41" exitCode=0 Feb 16 21:20:58 crc kubenswrapper[4805]: I0216 21:20:58.312086 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" event={"ID":"a710016e-8c14-45a3-b4c5-2b11b3fecd2a","Type":"ContainerDied","Data":"92b7bb2021ca4758d3d06a64d80e06e14379828ba65136f06f36c451b246ea41"} Feb 16 21:20:58 crc kubenswrapper[4805]: I0216 21:20:58.312479 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" event={"ID":"a710016e-8c14-45a3-b4c5-2b11b3fecd2a","Type":"ContainerStarted","Data":"e5b9476c553e1cde3bbf201599baa22f43ba450f924a62c2a0625ae6946ffc06"} Feb 16 21:20:58 crc kubenswrapper[4805]: I0216 21:20:58.314433 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" event={"ID":"a0874a96-7e2d-4cf2-847f-50d9b97704eb","Type":"ContainerDied","Data":"ddb2611965d3eca6ffaaaa0b164f9b8a1ef7a40128f2ca395a664b14cda0352e"} Feb 16 21:20:58 crc kubenswrapper[4805]: I0216 21:20:58.314487 4805 scope.go:117] "RemoveContainer" containerID="5570b75c2abbcaab944e1429f72c20e8a04af4c2cb31f4121fd8486bd08bd1c6" Feb 16 21:20:58 crc kubenswrapper[4805]: I0216 21:20:58.314614 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-lpsc5" Feb 16 21:20:58 crc kubenswrapper[4805]: I0216 21:20:58.547385 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-lpsc5"] Feb 16 21:20:58 crc kubenswrapper[4805]: I0216 21:20:58.561434 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-lpsc5"] Feb 16 21:20:58 crc kubenswrapper[4805]: I0216 21:20:58.561826 4805 scope.go:117] "RemoveContainer" containerID="cf041677e976787d1719e0a1abded87258aeac9018aab4a10af97559642086d5" Feb 16 21:20:59 crc kubenswrapper[4805]: I0216 21:20:59.325140 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" event={"ID":"a710016e-8c14-45a3-b4c5-2b11b3fecd2a","Type":"ContainerStarted","Data":"ef36777727d1d8cb22b67123c5438349a994c9819d8a9df6cefd691c7d9e84e6"} Feb 16 21:20:59 crc kubenswrapper[4805]: I0216 21:20:59.325591 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:20:59 crc kubenswrapper[4805]: I0216 21:20:59.365059 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" podStartSLOduration=3.365037839 podStartE2EDuration="3.365037839s" podCreationTimestamp="2026-02-16 21:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:20:59.353255051 +0000 UTC m=+1477.171938356" watchObservedRunningTime="2026-02-16 21:20:59.365037839 +0000 UTC m=+1477.183721144" Feb 16 21:20:59 crc kubenswrapper[4805]: E0216 21:20:59.599936 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" 
podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:20:59 crc kubenswrapper[4805]: I0216 21:20:59.615148 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0874a96-7e2d-4cf2-847f-50d9b97704eb" path="/var/lib/kubelet/pods/a0874a96-7e2d-4cf2-847f-50d9b97704eb/volumes" Feb 16 21:20:59 crc kubenswrapper[4805]: I0216 21:20:59.616103 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 21:20:59 crc kubenswrapper[4805]: E0216 21:20:59.709777 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:20:59 crc kubenswrapper[4805]: E0216 21:20:59.709869 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:20:59 crc kubenswrapper[4805]: E0216 21:20:59.710057 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:20:59 crc kubenswrapper[4805]: E0216 21:20:59.711290 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:21:00 crc kubenswrapper[4805]: E0216 21:21:00.345313 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:21:07 crc kubenswrapper[4805]: I0216 21:21:07.289615 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-fncpn" Feb 16 21:21:07 crc kubenswrapper[4805]: I0216 21:21:07.381632 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dwppj"] Feb 16 21:21:07 crc kubenswrapper[4805]: I0216 21:21:07.382218 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" podUID="22a3b250-0093-4f8a-a827-9bf0242af330" containerName="dnsmasq-dns" containerID="cri-o://97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063" gracePeriod=10 Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.028515 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.099359 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.099422 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.099468 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.100400 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.100451 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" gracePeriod=600 Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.204091 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-config\") pod \"22a3b250-0093-4f8a-a827-9bf0242af330\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.204525 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-sb\") pod \"22a3b250-0093-4f8a-a827-9bf0242af330\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.204675 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-openstack-edpm-ipam\") pod \"22a3b250-0093-4f8a-a827-9bf0242af330\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.204795 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdmqf\" (UniqueName: \"kubernetes.io/projected/22a3b250-0093-4f8a-a827-9bf0242af330-kube-api-access-wdmqf\") pod \"22a3b250-0093-4f8a-a827-9bf0242af330\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.204918 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-svc\") pod \"22a3b250-0093-4f8a-a827-9bf0242af330\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.205115 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-swift-storage-0\") pod \"22a3b250-0093-4f8a-a827-9bf0242af330\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " Feb 16 21:21:08 
crc kubenswrapper[4805]: I0216 21:21:08.205600 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-nb\") pod \"22a3b250-0093-4f8a-a827-9bf0242af330\" (UID: \"22a3b250-0093-4f8a-a827-9bf0242af330\") " Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.238200 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22a3b250-0093-4f8a-a827-9bf0242af330-kube-api-access-wdmqf" (OuterVolumeSpecName: "kube-api-access-wdmqf") pod "22a3b250-0093-4f8a-a827-9bf0242af330" (UID: "22a3b250-0093-4f8a-a827-9bf0242af330"). InnerVolumeSpecName "kube-api-access-wdmqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:21:08 crc kubenswrapper[4805]: E0216 21:21:08.238490 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.280225 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "22a3b250-0093-4f8a-a827-9bf0242af330" (UID: "22a3b250-0093-4f8a-a827-9bf0242af330"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.296106 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-config" (OuterVolumeSpecName: "config") pod "22a3b250-0093-4f8a-a827-9bf0242af330" (UID: "22a3b250-0093-4f8a-a827-9bf0242af330"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.301558 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "22a3b250-0093-4f8a-a827-9bf0242af330" (UID: "22a3b250-0093-4f8a-a827-9bf0242af330"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.303197 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "22a3b250-0093-4f8a-a827-9bf0242af330" (UID: "22a3b250-0093-4f8a-a827-9bf0242af330"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.304571 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "22a3b250-0093-4f8a-a827-9bf0242af330" (UID: "22a3b250-0093-4f8a-a827-9bf0242af330"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.319478 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "22a3b250-0093-4f8a-a827-9bf0242af330" (UID: "22a3b250-0093-4f8a-a827-9bf0242af330"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.323451 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.323484 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.323493 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.323504 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.323515 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdmqf\" (UniqueName: \"kubernetes.io/projected/22a3b250-0093-4f8a-a827-9bf0242af330-kube-api-access-wdmqf\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.323766 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.323776 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22a3b250-0093-4f8a-a827-9bf0242af330-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.537950 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" exitCode=0 Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.538023 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e"} Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.538082 4805 scope.go:117] "RemoveContainer" containerID="3695f3bf70d1d75f31deaf59ecf0f2732a5f8a503501ca8da83dcad9ebd6dcda" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.539112 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:21:08 crc kubenswrapper[4805]: E0216 21:21:08.539474 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.542391 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="22a3b250-0093-4f8a-a827-9bf0242af330" containerID="97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063" exitCode=0 Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.542425 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" event={"ID":"22a3b250-0093-4f8a-a827-9bf0242af330","Type":"ContainerDied","Data":"97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063"} Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.542446 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" event={"ID":"22a3b250-0093-4f8a-a827-9bf0242af330","Type":"ContainerDied","Data":"c116abb31241b59f2befef178973d8c2a9e7728358c038f83d1321089f260766"} Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.542495 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dwppj" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.668832 4805 scope.go:117] "RemoveContainer" containerID="97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.674273 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dwppj"] Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.702371 4805 scope.go:117] "RemoveContainer" containerID="79d465ee717f1c4dd439d76661edae50ba5d4bfa72517878ebf83d83819a56bd" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.703584 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dwppj"] Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.726117 4805 scope.go:117] "RemoveContainer" containerID="97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063" Feb 16 21:21:08 crc kubenswrapper[4805]: E0216 21:21:08.729340 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063\": container with ID starting with 97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063 not found: ID does not exist" containerID="97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.729409 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063"} err="failed to get container status \"97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063\": rpc error: code = NotFound desc = could not find container \"97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063\": container with ID starting with 97c38dad549496278a17ecd03204cf316f5f1cb69feda7c2fe367eddc74a8063 not found: ID does not exist" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.729442 4805 scope.go:117] "RemoveContainer" containerID="79d465ee717f1c4dd439d76661edae50ba5d4bfa72517878ebf83d83819a56bd" Feb 16 21:21:08 crc kubenswrapper[4805]: E0216 21:21:08.731116 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79d465ee717f1c4dd439d76661edae50ba5d4bfa72517878ebf83d83819a56bd\": container with ID starting with 79d465ee717f1c4dd439d76661edae50ba5d4bfa72517878ebf83d83819a56bd not found: ID does not exist" containerID="79d465ee717f1c4dd439d76661edae50ba5d4bfa72517878ebf83d83819a56bd" Feb 16 21:21:08 crc kubenswrapper[4805]: I0216 21:21:08.731165 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d465ee717f1c4dd439d76661edae50ba5d4bfa72517878ebf83d83819a56bd"} err="failed to get container status \"79d465ee717f1c4dd439d76661edae50ba5d4bfa72517878ebf83d83819a56bd\": rpc error: code = NotFound desc = could not find container \"79d465ee717f1c4dd439d76661edae50ba5d4bfa72517878ebf83d83819a56bd\": container 
with ID starting with 79d465ee717f1c4dd439d76661edae50ba5d4bfa72517878ebf83d83819a56bd not found: ID does not exist" Feb 16 21:21:09 crc kubenswrapper[4805]: I0216 21:21:09.610754 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22a3b250-0093-4f8a-a827-9bf0242af330" path="/var/lib/kubelet/pods/22a3b250-0093-4f8a-a827-9bf0242af330/volumes" Feb 16 21:21:11 crc kubenswrapper[4805]: E0216 21:21:11.787152 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:21:11 crc kubenswrapper[4805]: E0216 21:21:11.787692 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:21:11 crc kubenswrapper[4805]: E0216 21:21:11.787887 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:21:11 crc kubenswrapper[4805]: E0216 21:21:11.789375 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:21:14 crc kubenswrapper[4805]: E0216 21:21:14.601335 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:21:19 crc kubenswrapper[4805]: I0216 21:21:19.598206 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:21:19 crc kubenswrapper[4805]: E0216 21:21:19.599363 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:21:19 crc kubenswrapper[4805]: I0216 21:21:19.713904 4805 generic.go:334] "Generic (PLEG): container finished" podID="57bae43a-529b-4748-8a58-63b1a1c6db10" containerID="1ce1fe8a9cc79929f1c7997a88c4f6d1ee20e5fd59b5d881b1cda83ab0b0c3b4" exitCode=0 Feb 16 21:21:19 crc kubenswrapper[4805]: I0216 21:21:19.713987 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"57bae43a-529b-4748-8a58-63b1a1c6db10","Type":"ContainerDied","Data":"1ce1fe8a9cc79929f1c7997a88c4f6d1ee20e5fd59b5d881b1cda83ab0b0c3b4"} Feb 16 21:21:19 crc kubenswrapper[4805]: I0216 21:21:19.715477 4805 generic.go:334] "Generic (PLEG): container finished" podID="ee307678-615e-4eaf-be4c-6e44e3a31f27" containerID="54f7794c9890e75116dbe5c928f9ce135d8b349c674d5d4e2da3804ef45f443e" 
exitCode=0 Feb 16 21:21:19 crc kubenswrapper[4805]: I0216 21:21:19.715528 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee307678-615e-4eaf-be4c-6e44e3a31f27","Type":"ContainerDied","Data":"54f7794c9890e75116dbe5c928f9ce135d8b349c674d5d4e2da3804ef45f443e"} Feb 16 21:21:20 crc kubenswrapper[4805]: I0216 21:21:20.731881 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee307678-615e-4eaf-be4c-6e44e3a31f27","Type":"ContainerStarted","Data":"24e1954bd8efeb5922892c4f688d741aa59ba83f90ed90529e5eb536885edc2f"} Feb 16 21:21:20 crc kubenswrapper[4805]: I0216 21:21:20.732492 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:21:20 crc kubenswrapper[4805]: I0216 21:21:20.733973 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"57bae43a-529b-4748-8a58-63b1a1c6db10","Type":"ContainerStarted","Data":"5c5c4afb54ec69fdcd4dc221e05f531d5c93375a89c0de14423b2bbf8da21417"} Feb 16 21:21:20 crc kubenswrapper[4805]: I0216 21:21:20.734816 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 16 21:21:20 crc kubenswrapper[4805]: I0216 21:21:20.766382 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.766364136 podStartE2EDuration="36.766364136s" podCreationTimestamp="2026-02-16 21:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:21:20.756710375 +0000 UTC m=+1498.575393670" watchObservedRunningTime="2026-02-16 21:21:20.766364136 +0000 UTC m=+1498.585047421" Feb 16 21:21:20 crc kubenswrapper[4805]: I0216 21:21:20.792149 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-2" podStartSLOduration=36.792131733 podStartE2EDuration="36.792131733s" podCreationTimestamp="2026-02-16 21:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:21:20.784316782 +0000 UTC m=+1498.603000117" watchObservedRunningTime="2026-02-16 21:21:20.792131733 +0000 UTC m=+1498.610815018" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.465577 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd"] Feb 16 21:21:21 crc kubenswrapper[4805]: E0216 21:21:21.466448 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a3b250-0093-4f8a-a827-9bf0242af330" containerName="init" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.466474 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a3b250-0093-4f8a-a827-9bf0242af330" containerName="init" Feb 16 21:21:21 crc kubenswrapper[4805]: E0216 21:21:21.466525 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a3b250-0093-4f8a-a827-9bf0242af330" containerName="dnsmasq-dns" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.466534 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a3b250-0093-4f8a-a827-9bf0242af330" containerName="dnsmasq-dns" Feb 16 21:21:21 crc kubenswrapper[4805]: E0216 21:21:21.466550 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0874a96-7e2d-4cf2-847f-50d9b97704eb" containerName="dnsmasq-dns" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.466559 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0874a96-7e2d-4cf2-847f-50d9b97704eb" containerName="dnsmasq-dns" Feb 16 21:21:21 crc kubenswrapper[4805]: E0216 21:21:21.466573 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0874a96-7e2d-4cf2-847f-50d9b97704eb" containerName="init" Feb 16 21:21:21 crc kubenswrapper[4805]: 
I0216 21:21:21.466579 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0874a96-7e2d-4cf2-847f-50d9b97704eb" containerName="init" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.466818 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0874a96-7e2d-4cf2-847f-50d9b97704eb" containerName="dnsmasq-dns" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.466842 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a3b250-0093-4f8a-a827-9bf0242af330" containerName="dnsmasq-dns" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.467684 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.470568 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.470842 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.471125 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46tr9" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.471322 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.493773 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd"] Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.581514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5trg\" (UniqueName: \"kubernetes.io/projected/8938f803-35ca-4231-81e3-fbf996af4142-kube-api-access-q5trg\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.581568 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.581594 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.582104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.684935 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.686174 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5trg\" (UniqueName: \"kubernetes.io/projected/8938f803-35ca-4231-81e3-fbf996af4142-kube-api-access-q5trg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.686302 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.686373 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.690183 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.690215 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.701887 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.703291 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5trg\" (UniqueName: \"kubernetes.io/projected/8938f803-35ca-4231-81e3-fbf996af4142-kube-api-access-q5trg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:21 crc kubenswrapper[4805]: I0216 21:21:21.787784 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:22 crc kubenswrapper[4805]: I0216 21:21:22.760905 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd"] Feb 16 21:21:22 crc kubenswrapper[4805]: W0216 21:21:22.765279 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8938f803_35ca_4231_81e3_fbf996af4142.slice/crio-369affe58b6509337b0cc6671b3ecc460af7a637ac4487567d49550f3def2730 WatchSource:0}: Error finding container 369affe58b6509337b0cc6671b3ecc460af7a637ac4487567d49550f3def2730: Status 404 returned error can't find the container with id 369affe58b6509337b0cc6671b3ecc460af7a637ac4487567d49550f3def2730 Feb 16 21:21:23 crc kubenswrapper[4805]: E0216 21:21:23.616989 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:21:23 crc kubenswrapper[4805]: I0216 21:21:23.776240 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" event={"ID":"8938f803-35ca-4231-81e3-fbf996af4142","Type":"ContainerStarted","Data":"369affe58b6509337b0cc6671b3ecc460af7a637ac4487567d49550f3def2730"} Feb 16 21:21:28 crc kubenswrapper[4805]: I0216 21:21:28.727101 4805 scope.go:117] "RemoveContainer" containerID="1af8333abc1338aa367ebc5da7467323ad21953991e3cabf0e3664aaa7126be8" Feb 16 21:21:28 crc kubenswrapper[4805]: E0216 21:21:28.734776 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:21:28 crc kubenswrapper[4805]: E0216 21:21:28.734830 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:21:28 crc kubenswrapper[4805]: E0216 21:21:28.734966 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:21:28 crc kubenswrapper[4805]: E0216 21:21:28.736224 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:21:31 crc kubenswrapper[4805]: I0216 21:21:31.598883 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:21:31 crc kubenswrapper[4805]: E0216 21:21:31.599539 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:21:32 crc kubenswrapper[4805]: I0216 21:21:32.983300 4805 scope.go:117] "RemoveContainer" containerID="d911435d05729cc45fc9c80b1c823d7807379afaf4a6f2bf520d77cca9764278" Feb 16 21:21:33 crc kubenswrapper[4805]: I0216 21:21:33.046056 4805 scope.go:117] "RemoveContainer" containerID="15700f0c7164c46bd7e13ce8916f9f8fb68f1bc807989a780786c510041073b9" Feb 16 21:21:33 crc kubenswrapper[4805]: I0216 21:21:33.078693 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 21:21:33 crc kubenswrapper[4805]: I0216 21:21:33.121297 4805 scope.go:117] "RemoveContainer" containerID="e900fe6df134219de0ae70ee025fa622baf07d7f21b83139a4369c9fd946a11c" Feb 16 21:21:33 crc kubenswrapper[4805]: I0216 21:21:33.237075 4805 scope.go:117] "RemoveContainer" containerID="556959b5d927b0fd5566855488c1a1c2b84dd33bf82f28e866b38c946eb004ca" Feb 16 21:21:34 crc kubenswrapper[4805]: I0216 21:21:34.071410 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" 
event={"ID":"8938f803-35ca-4231-81e3-fbf996af4142","Type":"ContainerStarted","Data":"30d6262a63727f2e12b43514a453212bf510eb41d7922e7f2c1ed4227a1d48b2"} Feb 16 21:21:34 crc kubenswrapper[4805]: I0216 21:21:34.090160 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" podStartSLOduration=2.782486543 podStartE2EDuration="13.090141309s" podCreationTimestamp="2026-02-16 21:21:21 +0000 UTC" firstStartedPulling="2026-02-16 21:21:22.767912156 +0000 UTC m=+1500.586595451" lastFinishedPulling="2026-02-16 21:21:33.075566932 +0000 UTC m=+1510.894250217" observedRunningTime="2026-02-16 21:21:34.090114548 +0000 UTC m=+1511.908797863" watchObservedRunningTime="2026-02-16 21:21:34.090141309 +0000 UTC m=+1511.908824604" Feb 16 21:21:34 crc kubenswrapper[4805]: I0216 21:21:34.462975 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 16 21:21:34 crc kubenswrapper[4805]: I0216 21:21:34.531373 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 21:21:34 crc kubenswrapper[4805]: E0216 21:21:34.614933 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:21:34 crc kubenswrapper[4805]: I0216 21:21:34.627886 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:21:38 crc kubenswrapper[4805]: I0216 21:21:38.979540 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="14fe6c77-adbd-4abe-9aff-7bb72474d47b" containerName="rabbitmq" 
containerID="cri-o://cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1" gracePeriod=604796 Feb 16 21:21:43 crc kubenswrapper[4805]: I0216 21:21:43.610661 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:21:43 crc kubenswrapper[4805]: E0216 21:21:43.611574 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:21:43 crc kubenswrapper[4805]: E0216 21:21:43.612601 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:21:44 crc kubenswrapper[4805]: I0216 21:21:44.205314 4805 generic.go:334] "Generic (PLEG): container finished" podID="8938f803-35ca-4231-81e3-fbf996af4142" containerID="30d6262a63727f2e12b43514a453212bf510eb41d7922e7f2c1ed4227a1d48b2" exitCode=0 Feb 16 21:21:44 crc kubenswrapper[4805]: I0216 21:21:44.205355 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" event={"ID":"8938f803-35ca-4231-81e3-fbf996af4142","Type":"ContainerDied","Data":"30d6262a63727f2e12b43514a453212bf510eb41d7922e7f2c1ed4227a1d48b2"} Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.789686 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.797745 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.901643 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-plugins-conf\") pod \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.901706 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-server-conf\") pod \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.901892 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-erlang-cookie\") pod \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.901951 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-confd\") pod \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.902080 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g7hm\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-kube-api-access-7g7hm\") pod 
\"14fe6c77-adbd-4abe-9aff-7bb72474d47b\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.902134 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14fe6c77-adbd-4abe-9aff-7bb72474d47b-erlang-cookie-secret\") pod \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.902191 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-inventory\") pod \"8938f803-35ca-4231-81e3-fbf996af4142\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.902221 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-config-data\") pod \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.902268 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-repo-setup-combined-ca-bundle\") pod \"8938f803-35ca-4231-81e3-fbf996af4142\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.905853 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") pod \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.905927 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14fe6c77-adbd-4abe-9aff-7bb72474d47b-pod-info\") pod \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.906018 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-ssh-key-openstack-edpm-ipam\") pod \"8938f803-35ca-4231-81e3-fbf996af4142\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.906064 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5trg\" (UniqueName: \"kubernetes.io/projected/8938f803-35ca-4231-81e3-fbf996af4142-kube-api-access-q5trg\") pod \"8938f803-35ca-4231-81e3-fbf996af4142\" (UID: \"8938f803-35ca-4231-81e3-fbf996af4142\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.906160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-plugins\") pod \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.906214 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-tls\") pod \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\" (UID: \"14fe6c77-adbd-4abe-9aff-7bb72474d47b\") " Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.908883 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod 
"14fe6c77-adbd-4abe-9aff-7bb72474d47b" (UID: "14fe6c77-adbd-4abe-9aff-7bb72474d47b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.929912 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "14fe6c77-adbd-4abe-9aff-7bb72474d47b" (UID: "14fe6c77-adbd-4abe-9aff-7bb72474d47b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.930256 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "14fe6c77-adbd-4abe-9aff-7bb72474d47b" (UID: "14fe6c77-adbd-4abe-9aff-7bb72474d47b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.949056 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-kube-api-access-7g7hm" (OuterVolumeSpecName: "kube-api-access-7g7hm") pod "14fe6c77-adbd-4abe-9aff-7bb72474d47b" (UID: "14fe6c77-adbd-4abe-9aff-7bb72474d47b"). InnerVolumeSpecName "kube-api-access-7g7hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.952927 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8938f803-35ca-4231-81e3-fbf996af4142" (UID: "8938f803-35ca-4231-81e3-fbf996af4142"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.957939 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14fe6c77-adbd-4abe-9aff-7bb72474d47b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "14fe6c77-adbd-4abe-9aff-7bb72474d47b" (UID: "14fe6c77-adbd-4abe-9aff-7bb72474d47b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.978134 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8938f803-35ca-4231-81e3-fbf996af4142-kube-api-access-q5trg" (OuterVolumeSpecName: "kube-api-access-q5trg") pod "8938f803-35ca-4231-81e3-fbf996af4142" (UID: "8938f803-35ca-4231-81e3-fbf996af4142"). InnerVolumeSpecName "kube-api-access-q5trg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.985997 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "14fe6c77-adbd-4abe-9aff-7bb72474d47b" (UID: "14fe6c77-adbd-4abe-9aff-7bb72474d47b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:21:45 crc kubenswrapper[4805]: I0216 21:21:45.987878 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/14fe6c77-adbd-4abe-9aff-7bb72474d47b-pod-info" (OuterVolumeSpecName: "pod-info") pod "14fe6c77-adbd-4abe-9aff-7bb72474d47b" (UID: "14fe6c77-adbd-4abe-9aff-7bb72474d47b"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.011600 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.011634 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g7hm\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-kube-api-access-7g7hm\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.011647 4805 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14fe6c77-adbd-4abe-9aff-7bb72474d47b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.011659 4805 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.011672 4805 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14fe6c77-adbd-4abe-9aff-7bb72474d47b-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.011684 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5trg\" (UniqueName: \"kubernetes.io/projected/8938f803-35ca-4231-81e3-fbf996af4142-kube-api-access-q5trg\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.011696 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-plugins\") on node \"crc\" 
DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.011707 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.011720 4805 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.044294 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-config-data" (OuterVolumeSpecName: "config-data") pod "14fe6c77-adbd-4abe-9aff-7bb72474d47b" (UID: "14fe6c77-adbd-4abe-9aff-7bb72474d47b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.072901 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8938f803-35ca-4231-81e3-fbf996af4142" (UID: "8938f803-35ca-4231-81e3-fbf996af4142"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.092267 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-server-conf" (OuterVolumeSpecName: "server-conf") pod "14fe6c77-adbd-4abe-9aff-7bb72474d47b" (UID: "14fe6c77-adbd-4abe-9aff-7bb72474d47b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.118064 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.118094 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.118103 4805 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14fe6c77-adbd-4abe-9aff-7bb72474d47b-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.128884 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-inventory" (OuterVolumeSpecName: "inventory") pod "8938f803-35ca-4231-81e3-fbf996af4142" (UID: "8938f803-35ca-4231-81e3-fbf996af4142"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.152713 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533" (OuterVolumeSpecName: "persistence") pod "14fe6c77-adbd-4abe-9aff-7bb72474d47b" (UID: "14fe6c77-adbd-4abe-9aff-7bb72474d47b"). InnerVolumeSpecName "pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.210023 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "14fe6c77-adbd-4abe-9aff-7bb72474d47b" (UID: "14fe6c77-adbd-4abe-9aff-7bb72474d47b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.220149 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14fe6c77-adbd-4abe-9aff-7bb72474d47b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.220186 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8938f803-35ca-4231-81e3-fbf996af4142-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.220221 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") on node \"crc\" " Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.228887 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" event={"ID":"8938f803-35ca-4231-81e3-fbf996af4142","Type":"ContainerDied","Data":"369affe58b6509337b0cc6671b3ecc460af7a637ac4487567d49550f3def2730"} Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.228929 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="369affe58b6509337b0cc6671b3ecc460af7a637ac4487567d49550f3def2730" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.228979 4805 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.231963 4805 generic.go:334] "Generic (PLEG): container finished" podID="14fe6c77-adbd-4abe-9aff-7bb72474d47b" containerID="cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1" exitCode=0 Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.232012 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"14fe6c77-adbd-4abe-9aff-7bb72474d47b","Type":"ContainerDied","Data":"cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1"} Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.232043 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"14fe6c77-adbd-4abe-9aff-7bb72474d47b","Type":"ContainerDied","Data":"9d5c212a85d7beb85f387eb6fd0bd7d9784be2fddc27338346de322c628a1b2a"} Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.232062 4805 scope.go:117] "RemoveContainer" containerID="cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.232216 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.281115 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.281318 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533") on node "crc" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.284363 4805 scope.go:117] "RemoveContainer" containerID="5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.325620 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.341864 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.367679 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.379637 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 21:21:46 crc kubenswrapper[4805]: E0216 21:21:46.380108 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fe6c77-adbd-4abe-9aff-7bb72474d47b" containerName="setup-container" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.380126 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fe6c77-adbd-4abe-9aff-7bb72474d47b" containerName="setup-container" Feb 16 21:21:46 crc kubenswrapper[4805]: E0216 21:21:46.380150 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fe6c77-adbd-4abe-9aff-7bb72474d47b" containerName="rabbitmq" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.380156 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fe6c77-adbd-4abe-9aff-7bb72474d47b" 
containerName="rabbitmq" Feb 16 21:21:46 crc kubenswrapper[4805]: E0216 21:21:46.380175 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8938f803-35ca-4231-81e3-fbf996af4142" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.380182 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8938f803-35ca-4231-81e3-fbf996af4142" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.380392 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fe6c77-adbd-4abe-9aff-7bb72474d47b" containerName="rabbitmq" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.380408 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8938f803-35ca-4231-81e3-fbf996af4142" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.381499 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.390498 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87"] Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.395786 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.400206 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.400916 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.401091 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46tr9" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.401504 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.404403 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.420351 4805 scope.go:117] "RemoveContainer" containerID="cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1" Feb 16 21:21:46 crc kubenswrapper[4805]: E0216 21:21:46.421356 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1\": container with ID starting with cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1 not found: ID does not exist" containerID="cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.421384 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1"} err="failed to get container status \"cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1\": rpc error: code = NotFound desc = could not find 
container \"cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1\": container with ID starting with cfdd46ed39f1bb915d0761aee277046f0e4b66b80cf52ddfcebb950b09b9c7a1 not found: ID does not exist" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.421403 4805 scope.go:117] "RemoveContainer" containerID="5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.434416 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87"] Feb 16 21:21:46 crc kubenswrapper[4805]: E0216 21:21:46.437699 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877\": container with ID starting with 5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877 not found: ID does not exist" containerID="5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.437747 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877"} err="failed to get container status \"5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877\": rpc error: code = NotFound desc = could not find container \"5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877\": container with ID starting with 5aeb868d0ab99b341d906c056bae2f86c408f33b5bf9fb0dde1dcb0c56c02877 not found: ID does not exist" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.440593 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bj87\" (UID: 
\"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.440694 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bj87\" (UID: \"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.440770 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxnnv\" (UniqueName: \"kubernetes.io/projected/d713a0aa-87d9-4550-80a8-9b661ef5c585-kube-api-access-fxnnv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bj87\" (UID: \"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:46 crc kubenswrapper[4805]: E0216 21:21:46.471360 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14fe6c77_adbd_4abe_9aff_7bb72474d47b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8938f803_35ca_4231_81e3_fbf996af4142.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8938f803_35ca_4231_81e3_fbf996af4142.slice/crio-369affe58b6509337b0cc6671b3ecc460af7a637ac4487567d49550f3def2730\": RecentStats: unable to find data in memory cache]" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.542795 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-5bj87\" (UID: \"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.542871 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxnnv\" (UniqueName: \"kubernetes.io/projected/d713a0aa-87d9-4550-80a8-9b661ef5c585-kube-api-access-fxnnv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bj87\" (UID: \"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.542915 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d3db43a-846e-4b7b-b5ae-5711dc76477f-config-data\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.542935 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d3db43a-846e-4b7b-b5ae-5711dc76477f-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.542958 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d3db43a-846e-4b7b-b5ae-5711dc76477f-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.543000 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/3d3db43a-846e-4b7b-b5ae-5711dc76477f-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.543087 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.543131 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.543218 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9xrp\" (UniqueName: \"kubernetes.io/projected/3d3db43a-846e-4b7b-b5ae-5711dc76477f-kube-api-access-v9xrp\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.543366 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.543530 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/3d3db43a-846e-4b7b-b5ae-5711dc76477f-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.543907 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.543942 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bj87\" (UID: \"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.543971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.549619 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bj87\" (UID: \"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.557928 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-fxnnv\" (UniqueName: \"kubernetes.io/projected/d713a0aa-87d9-4550-80a8-9b661ef5c585-kube-api-access-fxnnv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bj87\" (UID: \"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.563152 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bj87\" (UID: \"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:46 crc kubenswrapper[4805]: E0216 21:21:46.600632 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.630429 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f5ndx"] Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.634246 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.644416 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f5ndx"] Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.645889 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d3db43a-846e-4b7b-b5ae-5711dc76477f-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.645966 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.646024 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.646130 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d3db43a-846e-4b7b-b5ae-5711dc76477f-config-data\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.646156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d3db43a-846e-4b7b-b5ae-5711dc76477f-erlang-cookie-secret\") pod \"rabbitmq-server-1\" 
(UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.646191 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d3db43a-846e-4b7b-b5ae-5711dc76477f-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.646247 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d3db43a-846e-4b7b-b5ae-5711dc76477f-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.646269 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.646304 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.646338 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9xrp\" (UniqueName: \"kubernetes.io/projected/3d3db43a-846e-4b7b-b5ae-5711dc76477f-kube-api-access-v9xrp\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc 
kubenswrapper[4805]: I0216 21:21:46.646370 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.647060 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.647138 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d3db43a-846e-4b7b-b5ae-5711dc76477f-config-data\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.647504 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d3db43a-846e-4b7b-b5ae-5711dc76477f-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.647658 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.648158 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/3d3db43a-846e-4b7b-b5ae-5711dc76477f-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.649380 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.649418 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/914431e2210613cdb42ee45d8789399625ced1e6ffb709fe1b4811c9831d39c2/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.661697 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.662029 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d3db43a-846e-4b7b-b5ae-5711dc76477f-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.662140 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d3db43a-846e-4b7b-b5ae-5711dc76477f-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: 
\"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.664093 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d3db43a-846e-4b7b-b5ae-5711dc76477f-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.681928 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9xrp\" (UniqueName: \"kubernetes.io/projected/3d3db43a-846e-4b7b-b5ae-5711dc76477f-kube-api-access-v9xrp\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.755909 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-catalog-content\") pod \"community-operators-f5ndx\" (UID: \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.755947 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.758896 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " 
pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.759245 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-utilities\") pod \"community-operators-f5ndx\" (UID: \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.760423 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj8m6\" (UniqueName: \"kubernetes.io/projected/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-kube-api-access-sj8m6\") pod \"community-operators-f5ndx\" (UID: \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.765410 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.774690 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.837818 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc4a3eca-671e-49c0-a605-7ae6fd156533\") pod \"rabbitmq-server-1\" (UID: \"3d3db43a-846e-4b7b-b5ae-5711dc76477f\") " pod="openstack/rabbitmq-server-1" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.863975 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-catalog-content\") pod \"community-operators-f5ndx\" (UID: \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.864577 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-catalog-content\") pod \"community-operators-f5ndx\" (UID: \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.864590 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-utilities\") pod \"community-operators-f5ndx\" (UID: \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.864968 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj8m6\" (UniqueName: \"kubernetes.io/projected/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-kube-api-access-sj8m6\") pod \"community-operators-f5ndx\" (UID: 
\"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.865315 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-utilities\") pod \"community-operators-f5ndx\" (UID: \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:46 crc kubenswrapper[4805]: I0216 21:21:46.887935 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj8m6\" (UniqueName: \"kubernetes.io/projected/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-kube-api-access-sj8m6\") pod \"community-operators-f5ndx\" (UID: \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:47 crc kubenswrapper[4805]: I0216 21:21:47.059550 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:47 crc kubenswrapper[4805]: I0216 21:21:47.311207 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 21:21:47 crc kubenswrapper[4805]: I0216 21:21:47.484784 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87"] Feb 16 21:21:47 crc kubenswrapper[4805]: I0216 21:21:47.610702 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fe6c77-adbd-4abe-9aff-7bb72474d47b" path="/var/lib/kubelet/pods/14fe6c77-adbd-4abe-9aff-7bb72474d47b/volumes" Feb 16 21:21:47 crc kubenswrapper[4805]: I0216 21:21:47.719277 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f5ndx"] Feb 16 21:21:47 crc kubenswrapper[4805]: W0216 21:21:47.725761 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d4bc71e_7653_4fbc_a2f1_d1856c67baa5.slice/crio-97cf6cf1e96b614172383dea673141a966921942eaac105ba6eea031d91604b3 WatchSource:0}: Error finding container 97cf6cf1e96b614172383dea673141a966921942eaac105ba6eea031d91604b3: Status 404 returned error can't find the container with id 97cf6cf1e96b614172383dea673141a966921942eaac105ba6eea031d91604b3 Feb 16 21:21:48 crc kubenswrapper[4805]: I0216 21:21:48.312053 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" event={"ID":"d713a0aa-87d9-4550-80a8-9b661ef5c585","Type":"ContainerStarted","Data":"0c6f0c66a4c6d0e7888f9f104973ef0093321eb58d85ac84836aa2f3156d8240"} Feb 16 21:21:48 crc kubenswrapper[4805]: I0216 21:21:48.312680 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" event={"ID":"d713a0aa-87d9-4550-80a8-9b661ef5c585","Type":"ContainerStarted","Data":"4c6ed24044fd8884168c875b24e77052e423ead852308e06d4c275d13c366d07"} Feb 16 21:21:48 crc kubenswrapper[4805]: I0216 21:21:48.313954 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3d3db43a-846e-4b7b-b5ae-5711dc76477f","Type":"ContainerStarted","Data":"718af8728c1c7a79fa890f7868a5ece745124b0f6c844488a20a1ec7afdaf26b"} Feb 16 21:21:48 crc kubenswrapper[4805]: I0216 21:21:48.315604 4805 generic.go:334] "Generic (PLEG): container finished" podID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" containerID="18127042d8e09a3ca9febf5726cc54a97ed43fbf7c5ee5f5675322c9f56b66ae" exitCode=0 Feb 16 21:21:48 crc kubenswrapper[4805]: I0216 21:21:48.315646 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5ndx" event={"ID":"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5","Type":"ContainerDied","Data":"18127042d8e09a3ca9febf5726cc54a97ed43fbf7c5ee5f5675322c9f56b66ae"} Feb 16 21:21:48 crc 
kubenswrapper[4805]: I0216 21:21:48.315671 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5ndx" event={"ID":"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5","Type":"ContainerStarted","Data":"97cf6cf1e96b614172383dea673141a966921942eaac105ba6eea031d91604b3"} Feb 16 21:21:48 crc kubenswrapper[4805]: I0216 21:21:48.332251 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" podStartSLOduration=1.908611192 podStartE2EDuration="2.332234409s" podCreationTimestamp="2026-02-16 21:21:46 +0000 UTC" firstStartedPulling="2026-02-16 21:21:47.511033948 +0000 UTC m=+1525.329717243" lastFinishedPulling="2026-02-16 21:21:47.934657165 +0000 UTC m=+1525.753340460" observedRunningTime="2026-02-16 21:21:48.331466189 +0000 UTC m=+1526.150149494" watchObservedRunningTime="2026-02-16 21:21:48.332234409 +0000 UTC m=+1526.150917704" Feb 16 21:21:49 crc kubenswrapper[4805]: I0216 21:21:49.336695 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3d3db43a-846e-4b7b-b5ae-5711dc76477f","Type":"ContainerStarted","Data":"36cc929a92bcfd3b3a1882579c00a87a97a6c6f1a4a6f6c4a2d423219497db00"} Feb 16 21:21:50 crc kubenswrapper[4805]: I0216 21:21:50.358502 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5ndx" event={"ID":"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5","Type":"ContainerStarted","Data":"8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f"} Feb 16 21:21:51 crc kubenswrapper[4805]: I0216 21:21:51.375426 4805 generic.go:334] "Generic (PLEG): container finished" podID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" containerID="8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f" exitCode=0 Feb 16 21:21:51 crc kubenswrapper[4805]: I0216 21:21:51.375837 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-f5ndx" event={"ID":"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5","Type":"ContainerDied","Data":"8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f"} Feb 16 21:21:51 crc kubenswrapper[4805]: I0216 21:21:51.377990 4805 generic.go:334] "Generic (PLEG): container finished" podID="d713a0aa-87d9-4550-80a8-9b661ef5c585" containerID="0c6f0c66a4c6d0e7888f9f104973ef0093321eb58d85ac84836aa2f3156d8240" exitCode=0 Feb 16 21:21:51 crc kubenswrapper[4805]: I0216 21:21:51.378053 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" event={"ID":"d713a0aa-87d9-4550-80a8-9b661ef5c585","Type":"ContainerDied","Data":"0c6f0c66a4c6d0e7888f9f104973ef0093321eb58d85ac84836aa2f3156d8240"} Feb 16 21:21:52 crc kubenswrapper[4805]: I0216 21:21:52.392837 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5ndx" event={"ID":"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5","Type":"ContainerStarted","Data":"cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014"} Feb 16 21:21:52 crc kubenswrapper[4805]: I0216 21:21:52.413829 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f5ndx" podStartSLOduration=2.9380494600000002 podStartE2EDuration="6.413813598s" podCreationTimestamp="2026-02-16 21:21:46 +0000 UTC" firstStartedPulling="2026-02-16 21:21:48.317143771 +0000 UTC m=+1526.135827076" lastFinishedPulling="2026-02-16 21:21:51.792907919 +0000 UTC m=+1529.611591214" observedRunningTime="2026-02-16 21:21:52.411625179 +0000 UTC m=+1530.230308474" watchObservedRunningTime="2026-02-16 21:21:52.413813598 +0000 UTC m=+1530.232496893" Feb 16 21:21:52 crc kubenswrapper[4805]: I0216 21:21:52.960507 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.136778 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-inventory\") pod \"d713a0aa-87d9-4550-80a8-9b661ef5c585\" (UID: \"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.136907 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxnnv\" (UniqueName: \"kubernetes.io/projected/d713a0aa-87d9-4550-80a8-9b661ef5c585-kube-api-access-fxnnv\") pod \"d713a0aa-87d9-4550-80a8-9b661ef5c585\" (UID: \"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.136953 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-ssh-key-openstack-edpm-ipam\") pod \"d713a0aa-87d9-4550-80a8-9b661ef5c585\" (UID: \"d713a0aa-87d9-4550-80a8-9b661ef5c585\") " Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.150766 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d713a0aa-87d9-4550-80a8-9b661ef5c585-kube-api-access-fxnnv" (OuterVolumeSpecName: "kube-api-access-fxnnv") pod "d713a0aa-87d9-4550-80a8-9b661ef5c585" (UID: "d713a0aa-87d9-4550-80a8-9b661ef5c585"). InnerVolumeSpecName "kube-api-access-fxnnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.184023 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-inventory" (OuterVolumeSpecName: "inventory") pod "d713a0aa-87d9-4550-80a8-9b661ef5c585" (UID: "d713a0aa-87d9-4550-80a8-9b661ef5c585"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.218377 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d713a0aa-87d9-4550-80a8-9b661ef5c585" (UID: "d713a0aa-87d9-4550-80a8-9b661ef5c585"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.239599 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.239636 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxnnv\" (UniqueName: \"kubernetes.io/projected/d713a0aa-87d9-4550-80a8-9b661ef5c585-kube-api-access-fxnnv\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.239647 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d713a0aa-87d9-4550-80a8-9b661ef5c585-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.404479 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" event={"ID":"d713a0aa-87d9-4550-80a8-9b661ef5c585","Type":"ContainerDied","Data":"4c6ed24044fd8884168c875b24e77052e423ead852308e06d4c275d13c366d07"} Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.404520 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c6ed24044fd8884168c875b24e77052e423ead852308e06d4c275d13c366d07" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 
21:21:53.404536 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bj87" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.541864 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc"] Feb 16 21:21:53 crc kubenswrapper[4805]: E0216 21:21:53.542303 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d713a0aa-87d9-4550-80a8-9b661ef5c585" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.542320 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d713a0aa-87d9-4550-80a8-9b661ef5c585" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.542580 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d713a0aa-87d9-4550-80a8-9b661ef5c585" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.543434 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.544804 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46tr9" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.545941 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.546497 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.547807 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.560011 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc"] Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.647827 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.648014 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.648086 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nvbb\" (UniqueName: \"kubernetes.io/projected/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-kube-api-access-6nvbb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.648174 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.750227 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.750313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nvbb\" (UniqueName: \"kubernetes.io/projected/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-kube-api-access-6nvbb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.750370 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.750541 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.754178 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.754761 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.755136 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.774480 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nvbb\" (UniqueName: \"kubernetes.io/projected/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-kube-api-access-6nvbb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-224bc\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:53 crc kubenswrapper[4805]: I0216 21:21:53.859853 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:21:54 crc kubenswrapper[4805]: W0216 21:21:54.477183 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90fd8fac_cdc0_402a_bb3d_746e06e28b6a.slice/crio-86114cb4fda6d63f9418515f7580748b001ee69a0ae4f649d27fe06b8265b65d WatchSource:0}: Error finding container 86114cb4fda6d63f9418515f7580748b001ee69a0ae4f649d27fe06b8265b65d: Status 404 returned error can't find the container with id 86114cb4fda6d63f9418515f7580748b001ee69a0ae4f649d27fe06b8265b65d Feb 16 21:21:54 crc kubenswrapper[4805]: I0216 21:21:54.483349 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc"] Feb 16 21:21:55 crc kubenswrapper[4805]: I0216 21:21:55.425436 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" event={"ID":"90fd8fac-cdc0-402a-bb3d-746e06e28b6a","Type":"ContainerStarted","Data":"8b66ab3b2b4e19b0d298c7e82c5859964b50407f4d00922a806793489e926bad"} Feb 16 21:21:55 crc kubenswrapper[4805]: I0216 21:21:55.426032 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" 
event={"ID":"90fd8fac-cdc0-402a-bb3d-746e06e28b6a","Type":"ContainerStarted","Data":"86114cb4fda6d63f9418515f7580748b001ee69a0ae4f649d27fe06b8265b65d"} Feb 16 21:21:55 crc kubenswrapper[4805]: I0216 21:21:55.446999 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" podStartSLOduration=2.012567135 podStartE2EDuration="2.446976135s" podCreationTimestamp="2026-02-16 21:21:53 +0000 UTC" firstStartedPulling="2026-02-16 21:21:54.480494446 +0000 UTC m=+1532.299177741" lastFinishedPulling="2026-02-16 21:21:54.914903446 +0000 UTC m=+1532.733586741" observedRunningTime="2026-02-16 21:21:55.439121362 +0000 UTC m=+1533.257804667" watchObservedRunningTime="2026-02-16 21:21:55.446976135 +0000 UTC m=+1533.265659430" Feb 16 21:21:57 crc kubenswrapper[4805]: I0216 21:21:57.060055 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:57 crc kubenswrapper[4805]: I0216 21:21:57.060462 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:57 crc kubenswrapper[4805]: I0216 21:21:57.129683 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:57 crc kubenswrapper[4805]: I0216 21:21:57.515917 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:21:57 crc kubenswrapper[4805]: I0216 21:21:57.578124 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f5ndx"] Feb 16 21:21:57 crc kubenswrapper[4805]: I0216 21:21:57.598652 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:21:57 crc kubenswrapper[4805]: E0216 21:21:57.599045 4805 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:21:57 crc kubenswrapper[4805]: E0216 21:21:57.599974 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:21:59 crc kubenswrapper[4805]: I0216 21:21:59.482513 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f5ndx" podUID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" containerName="registry-server" containerID="cri-o://cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014" gracePeriod=2 Feb 16 21:21:59 crc kubenswrapper[4805]: E0216 21:21:59.737914 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:21:59 crc kubenswrapper[4805]: E0216 21:21:59.738271 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:21:59 crc kubenswrapper[4805]: E0216 21:21:59.738418 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89
q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:21:59 crc kubenswrapper[4805]: E0216 21:21:59.742133 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.125248 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.203595 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-utilities\") pod \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\" (UID: \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.203815 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-catalog-content\") pod \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\" (UID: \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.204168 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj8m6\" (UniqueName: \"kubernetes.io/projected/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-kube-api-access-sj8m6\") pod \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\" (UID: \"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5\") " Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.206078 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-utilities" (OuterVolumeSpecName: "utilities") pod "9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" (UID: "9d4bc71e-7653-4fbc-a2f1-d1856c67baa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.213582 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-kube-api-access-sj8m6" (OuterVolumeSpecName: "kube-api-access-sj8m6") pod "9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" (UID: "9d4bc71e-7653-4fbc-a2f1-d1856c67baa5"). InnerVolumeSpecName "kube-api-access-sj8m6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.258149 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" (UID: "9d4bc71e-7653-4fbc-a2f1-d1856c67baa5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.306771 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.306799 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj8m6\" (UniqueName: \"kubernetes.io/projected/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-kube-api-access-sj8m6\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.306812 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.496753 4805 generic.go:334] "Generic (PLEG): container finished" podID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" containerID="cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014" exitCode=0 Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.496806 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5ndx" event={"ID":"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5","Type":"ContainerDied","Data":"cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014"} Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.496817 4805 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-f5ndx" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.496846 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5ndx" event={"ID":"9d4bc71e-7653-4fbc-a2f1-d1856c67baa5","Type":"ContainerDied","Data":"97cf6cf1e96b614172383dea673141a966921942eaac105ba6eea031d91604b3"} Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.496916 4805 scope.go:117] "RemoveContainer" containerID="cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.533020 4805 scope.go:117] "RemoveContainer" containerID="8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.551939 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f5ndx"] Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.566498 4805 scope.go:117] "RemoveContainer" containerID="18127042d8e09a3ca9febf5726cc54a97ed43fbf7c5ee5f5675322c9f56b66ae" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.571769 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f5ndx"] Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.641073 4805 scope.go:117] "RemoveContainer" containerID="cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014" Feb 16 21:22:00 crc kubenswrapper[4805]: E0216 21:22:00.641483 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014\": container with ID starting with cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014 not found: ID does not exist" containerID="cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.641514 
4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014"} err="failed to get container status \"cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014\": rpc error: code = NotFound desc = could not find container \"cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014\": container with ID starting with cf6a4716f7cc8e29918a1102b8b8c1567fe96b0ca6ca67265f4b32c34733f014 not found: ID does not exist" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.641533 4805 scope.go:117] "RemoveContainer" containerID="8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f" Feb 16 21:22:00 crc kubenswrapper[4805]: E0216 21:22:00.641992 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f\": container with ID starting with 8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f not found: ID does not exist" containerID="8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.642022 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f"} err="failed to get container status \"8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f\": rpc error: code = NotFound desc = could not find container \"8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f\": container with ID starting with 8c25e8dc680b5b36cc940e6efdc2352881d07d028082f13a5cbef24152d7528f not found: ID does not exist" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.642037 4805 scope.go:117] "RemoveContainer" containerID="18127042d8e09a3ca9febf5726cc54a97ed43fbf7c5ee5f5675322c9f56b66ae" Feb 16 21:22:00 crc kubenswrapper[4805]: E0216 
21:22:00.642442 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18127042d8e09a3ca9febf5726cc54a97ed43fbf7c5ee5f5675322c9f56b66ae\": container with ID starting with 18127042d8e09a3ca9febf5726cc54a97ed43fbf7c5ee5f5675322c9f56b66ae not found: ID does not exist" containerID="18127042d8e09a3ca9febf5726cc54a97ed43fbf7c5ee5f5675322c9f56b66ae" Feb 16 21:22:00 crc kubenswrapper[4805]: I0216 21:22:00.642484 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18127042d8e09a3ca9febf5726cc54a97ed43fbf7c5ee5f5675322c9f56b66ae"} err="failed to get container status \"18127042d8e09a3ca9febf5726cc54a97ed43fbf7c5ee5f5675322c9f56b66ae\": rpc error: code = NotFound desc = could not find container \"18127042d8e09a3ca9febf5726cc54a97ed43fbf7c5ee5f5675322c9f56b66ae\": container with ID starting with 18127042d8e09a3ca9febf5726cc54a97ed43fbf7c5ee5f5675322c9f56b66ae not found: ID does not exist" Feb 16 21:22:01 crc kubenswrapper[4805]: I0216 21:22:01.615789 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" path="/var/lib/kubelet/pods/9d4bc71e-7653-4fbc-a2f1-d1856c67baa5/volumes" Feb 16 21:22:11 crc kubenswrapper[4805]: I0216 21:22:11.599020 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:22:11 crc kubenswrapper[4805]: E0216 21:22:11.599851 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:22:12 crc kubenswrapper[4805]: E0216 21:22:12.603248 
4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:22:12 crc kubenswrapper[4805]: E0216 21:22:12.743583 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:22:12 crc kubenswrapper[4805]: E0216 21:22:12.743656 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:22:12 crc kubenswrapper[4805]: E0216 21:22:12.743859 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:22:12 crc kubenswrapper[4805]: E0216 21:22:12.745032 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.323676 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rkf92"] Feb 16 21:22:16 crc kubenswrapper[4805]: E0216 21:22:16.324854 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" containerName="extract-utilities" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.324870 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" containerName="extract-utilities" Feb 16 21:22:16 crc kubenswrapper[4805]: E0216 21:22:16.324918 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" containerName="registry-server" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.324926 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" containerName="registry-server" Feb 16 21:22:16 crc kubenswrapper[4805]: E0216 21:22:16.324956 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" containerName="extract-content" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.324965 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" containerName="extract-content" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.328439 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4bc71e-7653-4fbc-a2f1-d1856c67baa5" containerName="registry-server" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.330619 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.341491 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkf92"] Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.437286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-catalog-content\") pod \"redhat-marketplace-rkf92\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") " pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.437372 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4fc\" (UniqueName: \"kubernetes.io/projected/674170cf-a3b7-4d16-86d0-937e49c8d254-kube-api-access-gn4fc\") pod \"redhat-marketplace-rkf92\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") " pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.437492 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-utilities\") pod \"redhat-marketplace-rkf92\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") " pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.540345 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-catalog-content\") pod \"redhat-marketplace-rkf92\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") " pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.540419 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gn4fc\" (UniqueName: \"kubernetes.io/projected/674170cf-a3b7-4d16-86d0-937e49c8d254-kube-api-access-gn4fc\") pod \"redhat-marketplace-rkf92\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") " pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.540638 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-utilities\") pod \"redhat-marketplace-rkf92\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") " pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.540879 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-catalog-content\") pod \"redhat-marketplace-rkf92\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") " pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.541203 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-utilities\") pod \"redhat-marketplace-rkf92\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") " pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.571547 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn4fc\" (UniqueName: \"kubernetes.io/projected/674170cf-a3b7-4d16-86d0-937e49c8d254-kube-api-access-gn4fc\") pod \"redhat-marketplace-rkf92\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") " pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:16 crc kubenswrapper[4805]: I0216 21:22:16.682842 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:17 crc kubenswrapper[4805]: I0216 21:22:17.190670 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkf92"] Feb 16 21:22:17 crc kubenswrapper[4805]: I0216 21:22:17.747142 4805 generic.go:334] "Generic (PLEG): container finished" podID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerID="e5a0c830657a3e280aa80dfa4358dbb05047e62b125c9e817aee651d43b94737" exitCode=0 Feb 16 21:22:17 crc kubenswrapper[4805]: I0216 21:22:17.747414 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkf92" event={"ID":"674170cf-a3b7-4d16-86d0-937e49c8d254","Type":"ContainerDied","Data":"e5a0c830657a3e280aa80dfa4358dbb05047e62b125c9e817aee651d43b94737"} Feb 16 21:22:17 crc kubenswrapper[4805]: I0216 21:22:17.747439 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkf92" event={"ID":"674170cf-a3b7-4d16-86d0-937e49c8d254","Type":"ContainerStarted","Data":"022508466f95ee9d63733d1227076ed6beead4661eb625544eda11c599387649"} Feb 16 21:22:18 crc kubenswrapper[4805]: I0216 21:22:18.799041 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkf92" event={"ID":"674170cf-a3b7-4d16-86d0-937e49c8d254","Type":"ContainerStarted","Data":"81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a"} Feb 16 21:22:19 crc kubenswrapper[4805]: I0216 21:22:19.811821 4805 generic.go:334] "Generic (PLEG): container finished" podID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerID="81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a" exitCode=0 Feb 16 21:22:19 crc kubenswrapper[4805]: I0216 21:22:19.812190 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkf92" 
event={"ID":"674170cf-a3b7-4d16-86d0-937e49c8d254","Type":"ContainerDied","Data":"81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a"} Feb 16 21:22:20 crc kubenswrapper[4805]: I0216 21:22:20.833425 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkf92" event={"ID":"674170cf-a3b7-4d16-86d0-937e49c8d254","Type":"ContainerStarted","Data":"764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8"} Feb 16 21:22:20 crc kubenswrapper[4805]: I0216 21:22:20.863356 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rkf92" podStartSLOduration=2.383891428 podStartE2EDuration="4.863334606s" podCreationTimestamp="2026-02-16 21:22:16 +0000 UTC" firstStartedPulling="2026-02-16 21:22:17.752858812 +0000 UTC m=+1555.571542107" lastFinishedPulling="2026-02-16 21:22:20.23230198 +0000 UTC m=+1558.050985285" observedRunningTime="2026-02-16 21:22:20.859175058 +0000 UTC m=+1558.677858353" watchObservedRunningTime="2026-02-16 21:22:20.863334606 +0000 UTC m=+1558.682017911" Feb 16 21:22:21 crc kubenswrapper[4805]: I0216 21:22:21.845458 4805 generic.go:334] "Generic (PLEG): container finished" podID="3d3db43a-846e-4b7b-b5ae-5711dc76477f" containerID="36cc929a92bcfd3b3a1882579c00a87a97a6c6f1a4a6f6c4a2d423219497db00" exitCode=0 Feb 16 21:22:21 crc kubenswrapper[4805]: I0216 21:22:21.845540 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3d3db43a-846e-4b7b-b5ae-5711dc76477f","Type":"ContainerDied","Data":"36cc929a92bcfd3b3a1882579c00a87a97a6c6f1a4a6f6c4a2d423219497db00"} Feb 16 21:22:22 crc kubenswrapper[4805]: I0216 21:22:22.856419 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3d3db43a-846e-4b7b-b5ae-5711dc76477f","Type":"ContainerStarted","Data":"1c75bfc2606f57de8627871d727b994acf42d6a36232fa804c2f2158c8375a89"} Feb 16 21:22:22 crc 
kubenswrapper[4805]: I0216 21:22:22.857461 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 16 21:22:22 crc kubenswrapper[4805]: I0216 21:22:22.887301 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=36.887272913 podStartE2EDuration="36.887272913s" podCreationTimestamp="2026-02-16 21:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:22:22.882806227 +0000 UTC m=+1560.701489562" watchObservedRunningTime="2026-02-16 21:22:22.887272913 +0000 UTC m=+1560.705956208" Feb 16 21:22:24 crc kubenswrapper[4805]: E0216 21:22:24.600328 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:22:25 crc kubenswrapper[4805]: I0216 21:22:25.599320 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:22:25 crc kubenswrapper[4805]: E0216 21:22:25.600046 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:22:25 crc kubenswrapper[4805]: E0216 21:22:25.600470 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:22:26 crc kubenswrapper[4805]: I0216 21:22:26.683011 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:26 crc kubenswrapper[4805]: I0216 21:22:26.683072 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rkf92" Feb 16 21:22:27 crc kubenswrapper[4805]: I0216 21:22:27.736372 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rkf92" podUID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerName="registry-server" probeResult="failure" output=< Feb 16 21:22:27 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:22:27 crc kubenswrapper[4805]: > Feb 16 21:22:27 crc kubenswrapper[4805]: I0216 21:22:27.842198 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hkzpl"] Feb 16 21:22:27 crc kubenswrapper[4805]: I0216 21:22:27.845467 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hkzpl" Feb 16 21:22:27 crc kubenswrapper[4805]: I0216 21:22:27.854040 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hkzpl"] Feb 16 21:22:27 crc kubenswrapper[4805]: I0216 21:22:27.932206 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-utilities\") pod \"certified-operators-hkzpl\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") " pod="openshift-marketplace/certified-operators-hkzpl" Feb 16 21:22:27 crc kubenswrapper[4805]: I0216 21:22:27.932829 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrrq\" (UniqueName: \"kubernetes.io/projected/55fe474f-82b2-4b82-8e77-38fe02ed4db9-kube-api-access-tkrrq\") pod \"certified-operators-hkzpl\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") " pod="openshift-marketplace/certified-operators-hkzpl" Feb 16 21:22:27 crc kubenswrapper[4805]: I0216 21:22:27.932897 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-catalog-content\") pod \"certified-operators-hkzpl\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") " pod="openshift-marketplace/certified-operators-hkzpl" Feb 16 21:22:28 crc kubenswrapper[4805]: I0216 21:22:28.035293 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-utilities\") pod \"certified-operators-hkzpl\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") " pod="openshift-marketplace/certified-operators-hkzpl" Feb 16 21:22:28 crc kubenswrapper[4805]: I0216 21:22:28.035450 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tkrrq\" (UniqueName: \"kubernetes.io/projected/55fe474f-82b2-4b82-8e77-38fe02ed4db9-kube-api-access-tkrrq\") pod \"certified-operators-hkzpl\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") " pod="openshift-marketplace/certified-operators-hkzpl" Feb 16 21:22:28 crc kubenswrapper[4805]: I0216 21:22:28.035490 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-catalog-content\") pod \"certified-operators-hkzpl\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") " pod="openshift-marketplace/certified-operators-hkzpl" Feb 16 21:22:28 crc kubenswrapper[4805]: I0216 21:22:28.035842 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-utilities\") pod \"certified-operators-hkzpl\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") " pod="openshift-marketplace/certified-operators-hkzpl" Feb 16 21:22:28 crc kubenswrapper[4805]: I0216 21:22:28.035949 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-catalog-content\") pod \"certified-operators-hkzpl\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") " pod="openshift-marketplace/certified-operators-hkzpl" Feb 16 21:22:28 crc kubenswrapper[4805]: I0216 21:22:28.059375 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkrrq\" (UniqueName: \"kubernetes.io/projected/55fe474f-82b2-4b82-8e77-38fe02ed4db9-kube-api-access-tkrrq\") pod \"certified-operators-hkzpl\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") " pod="openshift-marketplace/certified-operators-hkzpl" Feb 16 21:22:28 crc kubenswrapper[4805]: I0216 21:22:28.168746 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hkzpl"
Feb 16 21:22:28 crc kubenswrapper[4805]: I0216 21:22:28.729658 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hkzpl"]
Feb 16 21:22:28 crc kubenswrapper[4805]: I0216 21:22:28.931606 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkzpl" event={"ID":"55fe474f-82b2-4b82-8e77-38fe02ed4db9","Type":"ContainerStarted","Data":"816e48f4841b2f7778b6bc44f05596384622fcb3aa4211692f510039678bb51f"}
Feb 16 21:22:29 crc kubenswrapper[4805]: I0216 21:22:29.943301 4805 generic.go:334] "Generic (PLEG): container finished" podID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" containerID="3e5cc0aa6ca7d9745183aa3aec08345eaa41722596f46d8346c59c8fff28e01b" exitCode=0
Feb 16 21:22:29 crc kubenswrapper[4805]: I0216 21:22:29.943359 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkzpl" event={"ID":"55fe474f-82b2-4b82-8e77-38fe02ed4db9","Type":"ContainerDied","Data":"3e5cc0aa6ca7d9745183aa3aec08345eaa41722596f46d8346c59c8fff28e01b"}
Feb 16 21:22:32 crc kubenswrapper[4805]: I0216 21:22:32.233337 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkzpl" event={"ID":"55fe474f-82b2-4b82-8e77-38fe02ed4db9","Type":"ContainerStarted","Data":"8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9"}
Feb 16 21:22:33 crc kubenswrapper[4805]: I0216 21:22:33.245284 4805 generic.go:334] "Generic (PLEG): container finished" podID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" containerID="8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9" exitCode=0
Feb 16 21:22:33 crc kubenswrapper[4805]: I0216 21:22:33.245378 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkzpl" event={"ID":"55fe474f-82b2-4b82-8e77-38fe02ed4db9","Type":"ContainerDied","Data":"8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9"}
Feb 16 21:22:33 crc kubenswrapper[4805]: I0216 21:22:33.508631 4805 scope.go:117] "RemoveContainer" containerID="f1067e7a3ce96adf6307131efa852971e92c89e36ce22c0b1f103b0aa0ef5941"
Feb 16 21:22:33 crc kubenswrapper[4805]: I0216 21:22:33.540329 4805 scope.go:117] "RemoveContainer" containerID="dc891749cb57102c67cc7d93a34901a0e19601c31b5532b5a55179aae3c41186"
Feb 16 21:22:33 crc kubenswrapper[4805]: I0216 21:22:33.571637 4805 scope.go:117] "RemoveContainer" containerID="a5624c98f24b25798eff966ebf252cc87d1fd5df04a3d9250be3e0700b32bd41"
Feb 16 21:22:34 crc kubenswrapper[4805]: I0216 21:22:34.263133 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkzpl" event={"ID":"55fe474f-82b2-4b82-8e77-38fe02ed4db9","Type":"ContainerStarted","Data":"182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381"}
Feb 16 21:22:34 crc kubenswrapper[4805]: I0216 21:22:34.290252 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hkzpl" podStartSLOduration=3.52029862 podStartE2EDuration="7.290230649s" podCreationTimestamp="2026-02-16 21:22:27 +0000 UTC" firstStartedPulling="2026-02-16 21:22:29.946449246 +0000 UTC m=+1567.765132541" lastFinishedPulling="2026-02-16 21:22:33.716381275 +0000 UTC m=+1571.535064570" observedRunningTime="2026-02-16 21:22:34.283376681 +0000 UTC m=+1572.102060016" watchObservedRunningTime="2026-02-16 21:22:34.290230649 +0000 UTC m=+1572.108913954"
Feb 16 21:22:36 crc kubenswrapper[4805]: I0216 21:22:36.742253 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rkf92"
Feb 16 21:22:36 crc kubenswrapper[4805]: I0216 21:22:36.773637 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Feb 16 21:22:36 crc kubenswrapper[4805]: I0216 21:22:36.823131 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rkf92"
Feb 16 21:22:36 crc kubenswrapper[4805]: I0216 21:22:36.856253 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 21:22:36 crc kubenswrapper[4805]: I0216 21:22:36.995566 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkf92"]
Feb 16 21:22:37 crc kubenswrapper[4805]: I0216 21:22:37.598644 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e"
Feb 16 21:22:37 crc kubenswrapper[4805]: E0216 21:22:37.598956 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 21:22:37 crc kubenswrapper[4805]: E0216 21:22:37.600043 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 21:22:38 crc kubenswrapper[4805]: I0216 21:22:38.169898 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hkzpl"
Feb 16 21:22:38 crc kubenswrapper[4805]: I0216 21:22:38.169938 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hkzpl"
Feb 16 21:22:38 crc kubenswrapper[4805]: I0216 21:22:38.223497 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hkzpl"
Feb 16 21:22:38 crc kubenswrapper[4805]: I0216 21:22:38.309136 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rkf92" podUID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerName="registry-server" containerID="cri-o://764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8" gracePeriod=2
Feb 16 21:22:38 crc kubenswrapper[4805]: I0216 21:22:38.362971 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hkzpl"
Feb 16 21:22:38 crc kubenswrapper[4805]: E0216 21:22:38.601045 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 21:22:38 crc kubenswrapper[4805]: I0216 21:22:38.887086 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkf92"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.074992 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-utilities\") pod \"674170cf-a3b7-4d16-86d0-937e49c8d254\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") "
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.075315 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn4fc\" (UniqueName: \"kubernetes.io/projected/674170cf-a3b7-4d16-86d0-937e49c8d254-kube-api-access-gn4fc\") pod \"674170cf-a3b7-4d16-86d0-937e49c8d254\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") "
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.075401 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-catalog-content\") pod \"674170cf-a3b7-4d16-86d0-937e49c8d254\" (UID: \"674170cf-a3b7-4d16-86d0-937e49c8d254\") "
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.075866 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-utilities" (OuterVolumeSpecName: "utilities") pod "674170cf-a3b7-4d16-86d0-937e49c8d254" (UID: "674170cf-a3b7-4d16-86d0-937e49c8d254"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.076201 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.096007 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/674170cf-a3b7-4d16-86d0-937e49c8d254-kube-api-access-gn4fc" (OuterVolumeSpecName: "kube-api-access-gn4fc") pod "674170cf-a3b7-4d16-86d0-937e49c8d254" (UID: "674170cf-a3b7-4d16-86d0-937e49c8d254"). InnerVolumeSpecName "kube-api-access-gn4fc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.128643 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "674170cf-a3b7-4d16-86d0-937e49c8d254" (UID: "674170cf-a3b7-4d16-86d0-937e49c8d254"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.178070 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn4fc\" (UniqueName: \"kubernetes.io/projected/674170cf-a3b7-4d16-86d0-937e49c8d254-kube-api-access-gn4fc\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.178109 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/674170cf-a3b7-4d16-86d0-937e49c8d254-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.340106 4805 generic.go:334] "Generic (PLEG): container finished" podID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerID="764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8" exitCode=0
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.340159 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkf92"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.340181 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkf92" event={"ID":"674170cf-a3b7-4d16-86d0-937e49c8d254","Type":"ContainerDied","Data":"764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8"}
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.340946 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkf92" event={"ID":"674170cf-a3b7-4d16-86d0-937e49c8d254","Type":"ContainerDied","Data":"022508466f95ee9d63733d1227076ed6beead4661eb625544eda11c599387649"}
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.340970 4805 scope.go:117] "RemoveContainer" containerID="764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.379547 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkf92"]
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.385089 4805 scope.go:117] "RemoveContainer" containerID="81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.394180 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkf92"]
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.411462 4805 scope.go:117] "RemoveContainer" containerID="e5a0c830657a3e280aa80dfa4358dbb05047e62b125c9e817aee651d43b94737"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.480334 4805 scope.go:117] "RemoveContainer" containerID="764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8"
Feb 16 21:22:39 crc kubenswrapper[4805]: E0216 21:22:39.480749 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8\": container with ID starting with 764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8 not found: ID does not exist" containerID="764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.480804 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8"} err="failed to get container status \"764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8\": rpc error: code = NotFound desc = could not find container \"764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8\": container with ID starting with 764f810c158eaef75e2dd425b689d0d7b080384703b6def0711cceea0df59cc8 not found: ID does not exist"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.480835 4805 scope.go:117] "RemoveContainer" containerID="81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a"
Feb 16 21:22:39 crc kubenswrapper[4805]: E0216 21:22:39.481122 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a\": container with ID starting with 81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a not found: ID does not exist" containerID="81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.481159 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a"} err="failed to get container status \"81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a\": rpc error: code = NotFound desc = could not find container \"81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a\": container with ID starting with 81fdd604a63b9e8bbe6a9d847db32d3436d672e449a4d72594c600325644221a not found: ID does not exist"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.481185 4805 scope.go:117] "RemoveContainer" containerID="e5a0c830657a3e280aa80dfa4358dbb05047e62b125c9e817aee651d43b94737"
Feb 16 21:22:39 crc kubenswrapper[4805]: E0216 21:22:39.481384 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5a0c830657a3e280aa80dfa4358dbb05047e62b125c9e817aee651d43b94737\": container with ID starting with e5a0c830657a3e280aa80dfa4358dbb05047e62b125c9e817aee651d43b94737 not found: ID does not exist" containerID="e5a0c830657a3e280aa80dfa4358dbb05047e62b125c9e817aee651d43b94737"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.481406 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5a0c830657a3e280aa80dfa4358dbb05047e62b125c9e817aee651d43b94737"} err="failed to get container status \"e5a0c830657a3e280aa80dfa4358dbb05047e62b125c9e817aee651d43b94737\": rpc error: code = NotFound desc = could not find container \"e5a0c830657a3e280aa80dfa4358dbb05047e62b125c9e817aee651d43b94737\": container with ID starting with e5a0c830657a3e280aa80dfa4358dbb05047e62b125c9e817aee651d43b94737 not found: ID does not exist"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.615171 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="674170cf-a3b7-4d16-86d0-937e49c8d254" path="/var/lib/kubelet/pods/674170cf-a3b7-4d16-86d0-937e49c8d254/volumes"
Feb 16 21:22:39 crc kubenswrapper[4805]: I0216 21:22:39.788022 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hkzpl"]
Feb 16 21:22:40 crc kubenswrapper[4805]: I0216 21:22:40.352245 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hkzpl" podUID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" containerName="registry-server" containerID="cri-o://182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381" gracePeriod=2
Feb 16 21:22:40 crc kubenswrapper[4805]: I0216 21:22:40.977065 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hkzpl"
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.121737 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-utilities\") pod \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") "
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.122082 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-catalog-content\") pod \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") "
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.122134 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkrrq\" (UniqueName: \"kubernetes.io/projected/55fe474f-82b2-4b82-8e77-38fe02ed4db9-kube-api-access-tkrrq\") pod \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\" (UID: \"55fe474f-82b2-4b82-8e77-38fe02ed4db9\") "
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.122804 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-utilities" (OuterVolumeSpecName: "utilities") pod "55fe474f-82b2-4b82-8e77-38fe02ed4db9" (UID: "55fe474f-82b2-4b82-8e77-38fe02ed4db9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.128918 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55fe474f-82b2-4b82-8e77-38fe02ed4db9-kube-api-access-tkrrq" (OuterVolumeSpecName: "kube-api-access-tkrrq") pod "55fe474f-82b2-4b82-8e77-38fe02ed4db9" (UID: "55fe474f-82b2-4b82-8e77-38fe02ed4db9"). InnerVolumeSpecName "kube-api-access-tkrrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.179613 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55fe474f-82b2-4b82-8e77-38fe02ed4db9" (UID: "55fe474f-82b2-4b82-8e77-38fe02ed4db9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.224685 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.224766 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55fe474f-82b2-4b82-8e77-38fe02ed4db9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.224778 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkrrq\" (UniqueName: \"kubernetes.io/projected/55fe474f-82b2-4b82-8e77-38fe02ed4db9-kube-api-access-tkrrq\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.366492 4805 generic.go:334] "Generic (PLEG): container finished" podID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" containerID="182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381" exitCode=0
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.366547 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkzpl" event={"ID":"55fe474f-82b2-4b82-8e77-38fe02ed4db9","Type":"ContainerDied","Data":"182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381"}
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.366577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkzpl" event={"ID":"55fe474f-82b2-4b82-8e77-38fe02ed4db9","Type":"ContainerDied","Data":"816e48f4841b2f7778b6bc44f05596384622fcb3aa4211692f510039678bb51f"}
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.366597 4805 scope.go:117] "RemoveContainer" containerID="182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381"
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.366754 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hkzpl"
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.430619 4805 scope.go:117] "RemoveContainer" containerID="8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9"
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.442507 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hkzpl"]
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.452475 4805 scope.go:117] "RemoveContainer" containerID="3e5cc0aa6ca7d9745183aa3aec08345eaa41722596f46d8346c59c8fff28e01b"
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.456377 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hkzpl"]
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.527074 4805 scope.go:117] "RemoveContainer" containerID="182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381"
Feb 16 21:22:41 crc kubenswrapper[4805]: E0216 21:22:41.527599 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381\": container with ID starting with 182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381 not found: ID does not exist" containerID="182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381"
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.527653 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381"} err="failed to get container status \"182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381\": rpc error: code = NotFound desc = could not find container \"182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381\": container with ID starting with 182d0e5d4298bebda801a44716edac54457b32269d2b0e19c7b6af35a64bb381 not found: ID does not exist"
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.527876 4805 scope.go:117] "RemoveContainer" containerID="8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9"
Feb 16 21:22:41 crc kubenswrapper[4805]: E0216 21:22:41.528206 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9\": container with ID starting with 8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9 not found: ID does not exist" containerID="8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9"
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.528336 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9"} err="failed to get container status \"8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9\": rpc error: code = NotFound desc = could not find container \"8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9\": container with ID starting with 8630325c041284988ff1074b2cb0f7e16058b1130620311bab307940cb93cad9 not found: ID does not exist"
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.528426 4805 scope.go:117] "RemoveContainer" containerID="3e5cc0aa6ca7d9745183aa3aec08345eaa41722596f46d8346c59c8fff28e01b"
Feb 16 21:22:41 crc kubenswrapper[4805]: E0216 21:22:41.528844 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e5cc0aa6ca7d9745183aa3aec08345eaa41722596f46d8346c59c8fff28e01b\": container with ID starting with 3e5cc0aa6ca7d9745183aa3aec08345eaa41722596f46d8346c59c8fff28e01b not found: ID does not exist" containerID="3e5cc0aa6ca7d9745183aa3aec08345eaa41722596f46d8346c59c8fff28e01b"
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.528893 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e5cc0aa6ca7d9745183aa3aec08345eaa41722596f46d8346c59c8fff28e01b"} err="failed to get container status \"3e5cc0aa6ca7d9745183aa3aec08345eaa41722596f46d8346c59c8fff28e01b\": rpc error: code = NotFound desc = could not find container \"3e5cc0aa6ca7d9745183aa3aec08345eaa41722596f46d8346c59c8fff28e01b\": container with ID starting with 3e5cc0aa6ca7d9745183aa3aec08345eaa41722596f46d8346c59c8fff28e01b not found: ID does not exist"
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.574395 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8a48053f-4668-43af-bda4-7af014d6457d" containerName="rabbitmq" containerID="cri-o://b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5" gracePeriod=604796
Feb 16 21:22:41 crc kubenswrapper[4805]: I0216 21:22:41.614835 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" path="/var/lib/kubelet/pods/55fe474f-82b2-4b82-8e77-38fe02ed4db9/volumes"
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.317447 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.413126 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a48053f-4668-43af-bda4-7af014d6457d-pod-info\") pod \"8a48053f-4668-43af-bda4-7af014d6457d\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.413191 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-erlang-cookie\") pod \"8a48053f-4668-43af-bda4-7af014d6457d\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.413241 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-server-conf\") pod \"8a48053f-4668-43af-bda4-7af014d6457d\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.413424 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qwrv\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-kube-api-access-6qwrv\") pod \"8a48053f-4668-43af-bda4-7af014d6457d\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.413460 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-config-data\") pod \"8a48053f-4668-43af-bda4-7af014d6457d\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.413493 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-tls\") pod \"8a48053f-4668-43af-bda4-7af014d6457d\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.414236 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d\") pod \"8a48053f-4668-43af-bda4-7af014d6457d\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.414263 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a48053f-4668-43af-bda4-7af014d6457d-erlang-cookie-secret\") pod \"8a48053f-4668-43af-bda4-7af014d6457d\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.414293 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-confd\") pod \"8a48053f-4668-43af-bda4-7af014d6457d\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.414311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-plugins\") pod \"8a48053f-4668-43af-bda4-7af014d6457d\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.414333 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-plugins-conf\") pod \"8a48053f-4668-43af-bda4-7af014d6457d\" (UID: \"8a48053f-4668-43af-bda4-7af014d6457d\") "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.414329 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8a48053f-4668-43af-bda4-7af014d6457d" (UID: "8a48053f-4668-43af-bda4-7af014d6457d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.415064 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.416902 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8a48053f-4668-43af-bda4-7af014d6457d" (UID: "8a48053f-4668-43af-bda4-7af014d6457d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.420382 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8a48053f-4668-43af-bda4-7af014d6457d-pod-info" (OuterVolumeSpecName: "pod-info") pod "8a48053f-4668-43af-bda4-7af014d6457d" (UID: "8a48053f-4668-43af-bda4-7af014d6457d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.420450 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8a48053f-4668-43af-bda4-7af014d6457d" (UID: "8a48053f-4668-43af-bda4-7af014d6457d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.426928 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-kube-api-access-6qwrv" (OuterVolumeSpecName: "kube-api-access-6qwrv") pod "8a48053f-4668-43af-bda4-7af014d6457d" (UID: "8a48053f-4668-43af-bda4-7af014d6457d"). InnerVolumeSpecName "kube-api-access-6qwrv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.430010 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a48053f-4668-43af-bda4-7af014d6457d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8a48053f-4668-43af-bda4-7af014d6457d" (UID: "8a48053f-4668-43af-bda4-7af014d6457d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.435487 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8a48053f-4668-43af-bda4-7af014d6457d" (UID: "8a48053f-4668-43af-bda4-7af014d6457d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.454412 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d" (OuterVolumeSpecName: "persistence") pod "8a48053f-4668-43af-bda4-7af014d6457d" (UID: "8a48053f-4668-43af-bda4-7af014d6457d"). InnerVolumeSpecName "pvc-7291f76f-5384-4af6-88f0-e041026cae5d". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.473951 4805 generic.go:334] "Generic (PLEG): container finished" podID="8a48053f-4668-43af-bda4-7af014d6457d" containerID="b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5" exitCode=0
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.474006 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a48053f-4668-43af-bda4-7af014d6457d","Type":"ContainerDied","Data":"b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5"}
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.474679 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.474918 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a48053f-4668-43af-bda4-7af014d6457d","Type":"ContainerDied","Data":"4bc327eb457e5f09d4cda6f26c72c097c9d7092a8c253a36de3c4e37d8afc9fa"}
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.474976 4805 scope.go:117] "RemoveContainer" containerID="b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5"
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.483767 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-config-data" (OuterVolumeSpecName: "config-data") pod "8a48053f-4668-43af-bda4-7af014d6457d" (UID: "8a48053f-4668-43af-bda4-7af014d6457d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.518664 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qwrv\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-kube-api-access-6qwrv\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.518694 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.518708 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.518750 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7291f76f-5384-4af6-88f0-e041026cae5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d\") on node \"crc\" "
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.518765 4805 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a48053f-4668-43af-bda4-7af014d6457d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.518777 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.518788 4805 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb
16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.518796 4805 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a48053f-4668-43af-bda4-7af014d6457d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.532135 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-server-conf" (OuterVolumeSpecName: "server-conf") pod "8a48053f-4668-43af-bda4-7af014d6457d" (UID: "8a48053f-4668-43af-bda4-7af014d6457d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.549638 4805 scope.go:117] "RemoveContainer" containerID="bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.552022 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.552155 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7291f76f-5384-4af6-88f0-e041026cae5d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d") on node "crc" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.571128 4805 scope.go:117] "RemoveContainer" containerID="b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5" Feb 16 21:22:48 crc kubenswrapper[4805]: E0216 21:22:48.571518 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5\": container with ID starting with b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5 not found: ID does not exist" containerID="b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.571546 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5"} err="failed to get container status \"b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5\": rpc error: code = NotFound desc = could not find container \"b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5\": container with ID starting with b032436ba8a0c8812f3053d2715d599a99aec96c3b49e4942f60d1cf4d0e09d5 not found: ID does not exist" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.571568 4805 scope.go:117] "RemoveContainer" containerID="bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d" Feb 16 21:22:48 crc kubenswrapper[4805]: E0216 21:22:48.571743 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d\": container with ID starting 
with bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d not found: ID does not exist" containerID="bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.571767 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d"} err="failed to get container status \"bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d\": rpc error: code = NotFound desc = could not find container \"bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d\": container with ID starting with bff5086cc8f56efde2dd47b8f36da25d19233f3c1cfb9cfc8bc9be31966f1f9d not found: ID does not exist" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.574124 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8a48053f-4668-43af-bda4-7af014d6457d" (UID: "8a48053f-4668-43af-bda4-7af014d6457d"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.620933 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-7291f76f-5384-4af6-88f0-e041026cae5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.620969 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a48053f-4668-43af-bda4-7af014d6457d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.620981 4805 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a48053f-4668-43af-bda4-7af014d6457d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.848116 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.868979 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.902926 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:22:48 crc kubenswrapper[4805]: E0216 21:22:48.903540 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a48053f-4668-43af-bda4-7af014d6457d" containerName="rabbitmq" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.903564 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a48053f-4668-43af-bda4-7af014d6457d" containerName="rabbitmq" Feb 16 21:22:48 crc kubenswrapper[4805]: E0216 21:22:48.903574 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a48053f-4668-43af-bda4-7af014d6457d" containerName="setup-container" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 
21:22:48.903581 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a48053f-4668-43af-bda4-7af014d6457d" containerName="setup-container" Feb 16 21:22:48 crc kubenswrapper[4805]: E0216 21:22:48.903590 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" containerName="registry-server" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.903597 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" containerName="registry-server" Feb 16 21:22:48 crc kubenswrapper[4805]: E0216 21:22:48.903626 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" containerName="extract-content" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.903633 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" containerName="extract-content" Feb 16 21:22:48 crc kubenswrapper[4805]: E0216 21:22:48.903657 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerName="registry-server" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.903667 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerName="registry-server" Feb 16 21:22:48 crc kubenswrapper[4805]: E0216 21:22:48.903691 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerName="extract-utilities" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.903699 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerName="extract-utilities" Feb 16 21:22:48 crc kubenswrapper[4805]: E0216 21:22:48.903743 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" containerName="extract-utilities" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 
21:22:48.903751 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" containerName="extract-utilities" Feb 16 21:22:48 crc kubenswrapper[4805]: E0216 21:22:48.903763 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerName="extract-content" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.903770 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerName="extract-content" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.904085 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a48053f-4668-43af-bda4-7af014d6457d" containerName="rabbitmq" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.904112 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="674170cf-a3b7-4d16-86d0-937e49c8d254" containerName="registry-server" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.904136 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="55fe474f-82b2-4b82-8e77-38fe02ed4db9" containerName="registry-server" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.905495 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 21:22:48 crc kubenswrapper[4805]: I0216 21:22:48.921960 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.035416 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.035486 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46463b23-6dbc-4d91-8942-687596251b5b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.035538 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7291f76f-5384-4af6-88f0-e041026cae5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.035562 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.035581 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/46463b23-6dbc-4d91-8942-687596251b5b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.035600 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll9g9\" (UniqueName: \"kubernetes.io/projected/46463b23-6dbc-4d91-8942-687596251b5b-kube-api-access-ll9g9\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.035628 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46463b23-6dbc-4d91-8942-687596251b5b-config-data\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.035690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46463b23-6dbc-4d91-8942-687596251b5b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.035755 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.035770 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/46463b23-6dbc-4d91-8942-687596251b5b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.035802 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.137590 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7291f76f-5384-4af6-88f0-e041026cae5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.137648 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.137670 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46463b23-6dbc-4d91-8942-687596251b5b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.137694 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll9g9\" (UniqueName: 
\"kubernetes.io/projected/46463b23-6dbc-4d91-8942-687596251b5b-kube-api-access-ll9g9\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.137762 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46463b23-6dbc-4d91-8942-687596251b5b-config-data\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.137878 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46463b23-6dbc-4d91-8942-687596251b5b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.138075 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.138105 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46463b23-6dbc-4d91-8942-687596251b5b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.138485 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " 
pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.138213 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.138827 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.138890 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46463b23-6dbc-4d91-8942-687596251b5b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.138912 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46463b23-6dbc-4d91-8942-687596251b5b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.138998 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.139168 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/46463b23-6dbc-4d91-8942-687596251b5b-config-data\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.140048 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46463b23-6dbc-4d91-8942-687596251b5b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.143166 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46463b23-6dbc-4d91-8942-687596251b5b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.143309 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46463b23-6dbc-4d91-8942-687596251b5b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.143756 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.144681 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46463b23-6dbc-4d91-8942-687596251b5b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 
21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.146072 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.146131 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7291f76f-5384-4af6-88f0-e041026cae5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a0b12b61bc88df910923ee6bb97fdac79f3a6f1e948ce57348fec3710da23f47/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.162143 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll9g9\" (UniqueName: \"kubernetes.io/projected/46463b23-6dbc-4d91-8942-687596251b5b-kube-api-access-ll9g9\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.213602 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7291f76f-5384-4af6-88f0-e041026cae5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7291f76f-5384-4af6-88f0-e041026cae5d\") pod \"rabbitmq-server-0\" (UID: \"46463b23-6dbc-4d91-8942-687596251b5b\") " pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.240233 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.617800 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a48053f-4668-43af-bda4-7af014d6457d" path="/var/lib/kubelet/pods/8a48053f-4668-43af-bda4-7af014d6457d/volumes" Feb 16 21:22:49 crc kubenswrapper[4805]: I0216 21:22:49.772992 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:22:49 crc kubenswrapper[4805]: W0216 21:22:49.776938 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46463b23_6dbc_4d91_8942_687596251b5b.slice/crio-ad0436a85d0f71efdee92c78a013f0a5399974a461a773ecebf5f2c9d6dd9249 WatchSource:0}: Error finding container ad0436a85d0f71efdee92c78a013f0a5399974a461a773ecebf5f2c9d6dd9249: Status 404 returned error can't find the container with id ad0436a85d0f71efdee92c78a013f0a5399974a461a773ecebf5f2c9d6dd9249 Feb 16 21:22:50 crc kubenswrapper[4805]: I0216 21:22:50.499774 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"46463b23-6dbc-4d91-8942-687596251b5b","Type":"ContainerStarted","Data":"ad0436a85d0f71efdee92c78a013f0a5399974a461a773ecebf5f2c9d6dd9249"} Feb 16 21:22:51 crc kubenswrapper[4805]: I0216 21:22:51.598179 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:22:51 crc kubenswrapper[4805]: E0216 21:22:51.598805 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:22:52 crc 
kubenswrapper[4805]: I0216 21:22:52.554155 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"46463b23-6dbc-4d91-8942-687596251b5b","Type":"ContainerStarted","Data":"b919c0fd7e960787b44f4ff5932aefd2eff5a6658793fffc081f13290b3dcc15"} Feb 16 21:22:52 crc kubenswrapper[4805]: E0216 21:22:52.600149 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:22:52 crc kubenswrapper[4805]: E0216 21:22:52.600417 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:23:03 crc kubenswrapper[4805]: I0216 21:23:03.606691 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:23:03 crc kubenswrapper[4805]: E0216 21:23:03.607539 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:23:03 crc kubenswrapper[4805]: E0216 21:23:03.609623 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:23:06 crc kubenswrapper[4805]: E0216 21:23:06.601302 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:23:15 crc kubenswrapper[4805]: I0216 21:23:15.598840 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:23:15 crc kubenswrapper[4805]: E0216 21:23:15.599996 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:23:16 crc kubenswrapper[4805]: E0216 21:23:16.606496 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:23:20 crc kubenswrapper[4805]: E0216 21:23:20.724709 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:23:20 crc kubenswrapper[4805]: E0216 21:23:20.725456 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:23:20 crc kubenswrapper[4805]: E0216 21:23:20.725618 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:23:20 crc kubenswrapper[4805]: E0216 21:23:20.728859 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:23:24 crc kubenswrapper[4805]: I0216 21:23:24.996940 4805 generic.go:334] "Generic (PLEG): container finished" podID="46463b23-6dbc-4d91-8942-687596251b5b" containerID="b919c0fd7e960787b44f4ff5932aefd2eff5a6658793fffc081f13290b3dcc15" exitCode=0 Feb 16 21:23:24 crc kubenswrapper[4805]: I0216 21:23:24.996998 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"46463b23-6dbc-4d91-8942-687596251b5b","Type":"ContainerDied","Data":"b919c0fd7e960787b44f4ff5932aefd2eff5a6658793fffc081f13290b3dcc15"} Feb 16 21:23:26 crc kubenswrapper[4805]: I0216 21:23:26.014733 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"46463b23-6dbc-4d91-8942-687596251b5b","Type":"ContainerStarted","Data":"42eabeba7dbaf3b40ad8e4bfbc5b19c6e166b28e6cb007c1557ab0702bb70ee7"} Feb 16 21:23:26 crc kubenswrapper[4805]: I0216 21:23:26.015485 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 21:23:26 crc kubenswrapper[4805]: I0216 21:23:26.055921 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.055901098 podStartE2EDuration="38.055901098s" podCreationTimestamp="2026-02-16 21:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:23:26.044400239 +0000 UTC m=+1623.863083624" watchObservedRunningTime="2026-02-16 21:23:26.055901098 +0000 UTC m=+1623.874584393" Feb 16 21:23:26 crc kubenswrapper[4805]: I0216 21:23:26.598924 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:23:26 crc kubenswrapper[4805]: E0216 21:23:26.599591 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:23:30 crc kubenswrapper[4805]: E0216 21:23:30.600456 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:23:33 crc kubenswrapper[4805]: I0216 21:23:33.730438 4805 scope.go:117] "RemoveContainer" containerID="1dc8ebced911595effdfb46769b8f5a1816b37d485173fe40b8a24be6cdc4f14" Feb 16 21:23:35 crc kubenswrapper[4805]: E0216 21:23:35.601360 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:23:38 crc kubenswrapper[4805]: I0216 21:23:38.599339 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:23:38 crc kubenswrapper[4805]: E0216 21:23:38.600389 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:23:39 crc kubenswrapper[4805]: I0216 21:23:39.244971 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 21:23:41 crc kubenswrapper[4805]: E0216 21:23:41.728665 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:23:41 crc kubenswrapper[4805]: E0216 21:23:41.729235 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:23:41 crc kubenswrapper[4805]: E0216 21:23:41.729369 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:23:41 crc kubenswrapper[4805]: E0216 21:23:41.730609 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:23:46 crc kubenswrapper[4805]: E0216 21:23:46.600570 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:23:52 crc kubenswrapper[4805]: I0216 21:23:52.605305 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:23:52 crc kubenswrapper[4805]: E0216 21:23:52.606438 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:23:54 crc kubenswrapper[4805]: E0216 21:23:54.604355 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:24:00 crc kubenswrapper[4805]: E0216 21:24:00.609806 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:24:05 
crc kubenswrapper[4805]: I0216 21:24:05.598752 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:24:05 crc kubenswrapper[4805]: E0216 21:24:05.600156 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:24:06 crc kubenswrapper[4805]: E0216 21:24:06.600929 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:24:13 crc kubenswrapper[4805]: E0216 21:24:13.617967 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:24:16 crc kubenswrapper[4805]: I0216 21:24:16.597850 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:24:16 crc kubenswrapper[4805]: E0216 21:24:16.598628 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:24:19 crc kubenswrapper[4805]: E0216 21:24:19.601544 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:24:26 crc kubenswrapper[4805]: E0216 21:24:26.603346 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:24:27 crc kubenswrapper[4805]: I0216 21:24:27.598384 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:24:27 crc kubenswrapper[4805]: E0216 21:24:27.599223 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:24:33 crc kubenswrapper[4805]: I0216 21:24:33.905078 4805 scope.go:117] "RemoveContainer" containerID="4970e47908ff879e23efb54093422d058edcf556b50c85fc90f506c254ad5838" Feb 16 21:24:33 crc kubenswrapper[4805]: I0216 21:24:33.956207 4805 scope.go:117] "RemoveContainer" 
containerID="3267befb626341ec1e07249560475c674806caff52d8f8836cc5ffe148d0e403" Feb 16 21:24:34 crc kubenswrapper[4805]: I0216 21:24:34.014367 4805 scope.go:117] "RemoveContainer" containerID="4052690266f057d6cdf2956d14aa0bf7a4e339bae7030ee61237210f5d00874b" Feb 16 21:24:34 crc kubenswrapper[4805]: I0216 21:24:34.046919 4805 scope.go:117] "RemoveContainer" containerID="0d325a4cf9225ea9ace7415998a87c0f80ba64f569223ee4855baed3ac3d4608" Feb 16 21:24:34 crc kubenswrapper[4805]: I0216 21:24:34.071080 4805 scope.go:117] "RemoveContainer" containerID="6278f140e04d5ac7be41f220b3a50ae372905e7f56a8b3e2bfa0d1c5614d57e7" Feb 16 21:24:34 crc kubenswrapper[4805]: E0216 21:24:34.602032 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:24:40 crc kubenswrapper[4805]: E0216 21:24:40.599711 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:24:42 crc kubenswrapper[4805]: I0216 21:24:42.598597 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:24:42 crc kubenswrapper[4805]: E0216 21:24:42.600905 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:24:47 crc kubenswrapper[4805]: E0216 21:24:47.600791 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:24:53 crc kubenswrapper[4805]: I0216 21:24:53.178782 4805 generic.go:334] "Generic (PLEG): container finished" podID="90fd8fac-cdc0-402a-bb3d-746e06e28b6a" containerID="8b66ab3b2b4e19b0d298c7e82c5859964b50407f4d00922a806793489e926bad" exitCode=0 Feb 16 21:24:53 crc kubenswrapper[4805]: I0216 21:24:53.178849 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" event={"ID":"90fd8fac-cdc0-402a-bb3d-746e06e28b6a","Type":"ContainerDied","Data":"8b66ab3b2b4e19b0d298c7e82c5859964b50407f4d00922a806793489e926bad"} Feb 16 21:24:53 crc kubenswrapper[4805]: I0216 21:24:53.615674 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:24:53 crc kubenswrapper[4805]: E0216 21:24:53.616191 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.725604 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.812496 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nvbb\" (UniqueName: \"kubernetes.io/projected/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-kube-api-access-6nvbb\") pod \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.812676 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-ssh-key-openstack-edpm-ipam\") pod \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.812740 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-bootstrap-combined-ca-bundle\") pod \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.812817 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-inventory\") pod \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\" (UID: \"90fd8fac-cdc0-402a-bb3d-746e06e28b6a\") " Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.818052 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "90fd8fac-cdc0-402a-bb3d-746e06e28b6a" (UID: "90fd8fac-cdc0-402a-bb3d-746e06e28b6a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.823743 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-kube-api-access-6nvbb" (OuterVolumeSpecName: "kube-api-access-6nvbb") pod "90fd8fac-cdc0-402a-bb3d-746e06e28b6a" (UID: "90fd8fac-cdc0-402a-bb3d-746e06e28b6a"). InnerVolumeSpecName "kube-api-access-6nvbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.852392 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90fd8fac-cdc0-402a-bb3d-746e06e28b6a" (UID: "90fd8fac-cdc0-402a-bb3d-746e06e28b6a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.863683 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-inventory" (OuterVolumeSpecName: "inventory") pod "90fd8fac-cdc0-402a-bb3d-746e06e28b6a" (UID: "90fd8fac-cdc0-402a-bb3d-746e06e28b6a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.916300 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.916329 4805 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.916340 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 21:24:54 crc kubenswrapper[4805]: I0216 21:24:54.916351 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nvbb\" (UniqueName: \"kubernetes.io/projected/90fd8fac-cdc0-402a-bb3d-746e06e28b6a-kube-api-access-6nvbb\") on node \"crc\" DevicePath \"\"" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.209359 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" event={"ID":"90fd8fac-cdc0-402a-bb3d-746e06e28b6a","Type":"ContainerDied","Data":"86114cb4fda6d63f9418515f7580748b001ee69a0ae4f649d27fe06b8265b65d"} Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.209410 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-224bc" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.209416 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86114cb4fda6d63f9418515f7580748b001ee69a0ae4f649d27fe06b8265b65d" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.318914 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx"] Feb 16 21:24:55 crc kubenswrapper[4805]: E0216 21:24:55.319486 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fd8fac-cdc0-402a-bb3d-746e06e28b6a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.319513 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fd8fac-cdc0-402a-bb3d-746e06e28b6a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.319917 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="90fd8fac-cdc0-402a-bb3d-746e06e28b6a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.321383 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.326241 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.326842 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46tr9" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.327112 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.327468 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.358208 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx"] Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.427533 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj8w7\" (UniqueName: \"kubernetes.io/projected/f7abc29d-8762-4f66-9b74-5bae943250ee-kube-api-access-kj8w7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2njwx\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.428003 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2njwx\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 
21:24:55.428304 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2njwx\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.531327 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2njwx\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.531466 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2njwx\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.531666 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj8w7\" (UniqueName: \"kubernetes.io/projected/f7abc29d-8762-4f66-9b74-5bae943250ee-kube-api-access-kj8w7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2njwx\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.538994 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2njwx\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.547653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2njwx\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.553355 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj8w7\" (UniqueName: \"kubernetes.io/projected/f7abc29d-8762-4f66-9b74-5bae943250ee-kube-api-access-kj8w7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2njwx\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:24:55 crc kubenswrapper[4805]: E0216 21:24:55.600399 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:24:55 crc kubenswrapper[4805]: I0216 21:24:55.650221 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:24:56 crc kubenswrapper[4805]: I0216 21:24:56.269619 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx"] Feb 16 21:24:57 crc kubenswrapper[4805]: I0216 21:24:57.242306 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" event={"ID":"f7abc29d-8762-4f66-9b74-5bae943250ee","Type":"ContainerStarted","Data":"0dc557d7661ffb281ed1c92145cea28bfb7e1c582544030cf071feebc25056d9"} Feb 16 21:24:57 crc kubenswrapper[4805]: I0216 21:24:57.242949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" event={"ID":"f7abc29d-8762-4f66-9b74-5bae943250ee","Type":"ContainerStarted","Data":"ab9b83135d63e0ba8cbba6fc194fc26d4555d078ac1dbb9c84fb91eab2872250"} Feb 16 21:24:57 crc kubenswrapper[4805]: I0216 21:24:57.266746 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" podStartSLOduration=1.8124178610000001 podStartE2EDuration="2.266711466s" podCreationTimestamp="2026-02-16 21:24:55 +0000 UTC" firstStartedPulling="2026-02-16 21:24:56.264868444 +0000 UTC m=+1714.083551779" lastFinishedPulling="2026-02-16 21:24:56.719162049 +0000 UTC m=+1714.537845384" observedRunningTime="2026-02-16 21:24:57.256654964 +0000 UTC m=+1715.075338279" watchObservedRunningTime="2026-02-16 21:24:57.266711466 +0000 UTC m=+1715.085394771" Feb 16 21:24:58 crc kubenswrapper[4805]: I0216 21:24:58.089278 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6jbxd"] Feb 16 21:24:58 crc kubenswrapper[4805]: I0216 21:24:58.105034 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db51-account-create-update-bxgf7"] Feb 16 21:24:58 crc 
kubenswrapper[4805]: I0216 21:24:58.126983 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-7e8a-account-create-update-dzqhr"] Feb 16 21:24:58 crc kubenswrapper[4805]: I0216 21:24:58.143791 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9sptw"] Feb 16 21:24:58 crc kubenswrapper[4805]: I0216 21:24:58.158432 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-104e-account-create-update-g4bmr"] Feb 16 21:24:58 crc kubenswrapper[4805]: I0216 21:24:58.173747 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db51-account-create-update-bxgf7"] Feb 16 21:24:58 crc kubenswrapper[4805]: I0216 21:24:58.193794 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-7e8a-account-create-update-dzqhr"] Feb 16 21:24:58 crc kubenswrapper[4805]: I0216 21:24:58.209793 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6jbxd"] Feb 16 21:24:58 crc kubenswrapper[4805]: I0216 21:24:58.231785 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9sptw"] Feb 16 21:24:58 crc kubenswrapper[4805]: I0216 21:24:58.240068 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-104e-account-create-update-g4bmr"] Feb 16 21:24:59 crc kubenswrapper[4805]: I0216 21:24:59.043859 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vms5f"] Feb 16 21:24:59 crc kubenswrapper[4805]: I0216 21:24:59.060126 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vms5f"] Feb 16 21:24:59 crc kubenswrapper[4805]: I0216 21:24:59.632699 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f63756-edb7-48fb-a2b0-0c911a9f7520" path="/var/lib/kubelet/pods/18f63756-edb7-48fb-a2b0-0c911a9f7520/volumes" Feb 16 21:24:59 crc 
kubenswrapper[4805]: I0216 21:24:59.634063 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a79f73-956b-4a3f-896a-ec53b38e84f4" path="/var/lib/kubelet/pods/29a79f73-956b-4a3f-896a-ec53b38e84f4/volumes" Feb 16 21:24:59 crc kubenswrapper[4805]: I0216 21:24:59.635047 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea45f0e-b56b-42e5-a7e3-c30894c51f9f" path="/var/lib/kubelet/pods/2ea45f0e-b56b-42e5-a7e3-c30894c51f9f/volumes" Feb 16 21:24:59 crc kubenswrapper[4805]: I0216 21:24:59.636067 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce" path="/var/lib/kubelet/pods/3e1ca094-bee0-4e7f-a0e6-f3e9f6cb0dce/volumes" Feb 16 21:24:59 crc kubenswrapper[4805]: I0216 21:24:59.637629 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f67549-f167-4252-9aa0-d19ab787ab6b" path="/var/lib/kubelet/pods/61f67549-f167-4252-9aa0-d19ab787ab6b/volumes" Feb 16 21:24:59 crc kubenswrapper[4805]: I0216 21:24:59.638615 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb601c9-1da6-47be-b108-beb6a9cfbd03" path="/var/lib/kubelet/pods/6eb601c9-1da6-47be-b108-beb6a9cfbd03/volumes" Feb 16 21:25:02 crc kubenswrapper[4805]: E0216 21:25:02.605074 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:25:04 crc kubenswrapper[4805]: I0216 21:25:04.598649 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:25:04 crc kubenswrapper[4805]: E0216 21:25:04.600396 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:25:05 crc kubenswrapper[4805]: I0216 21:25:05.035754 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d910-account-create-update-trmdz"] Feb 16 21:25:05 crc kubenswrapper[4805]: I0216 21:25:05.054545 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d910-account-create-update-trmdz"] Feb 16 21:25:05 crc kubenswrapper[4805]: I0216 21:25:05.069518 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-g867x"] Feb 16 21:25:05 crc kubenswrapper[4805]: I0216 21:25:05.079094 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-g867x"] Feb 16 21:25:05 crc kubenswrapper[4805]: I0216 21:25:05.616355 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52022a46-b370-413d-be8d-de7c5d3ed7af" path="/var/lib/kubelet/pods/52022a46-b370-413d-be8d-de7c5d3ed7af/volumes" Feb 16 21:25:05 crc kubenswrapper[4805]: I0216 21:25:05.617493 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc7e606-c1b5-4a97-bca8-21866460d586" path="/var/lib/kubelet/pods/adc7e606-c1b5-4a97-bca8-21866460d586/volumes" Feb 16 21:25:10 crc kubenswrapper[4805]: E0216 21:25:10.600404 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:25:11 crc kubenswrapper[4805]: I0216 21:25:11.051440 4805 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q"] Feb 16 21:25:11 crc kubenswrapper[4805]: I0216 21:25:11.062920 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-6ff9-account-create-update-smp8q"] Feb 16 21:25:11 crc kubenswrapper[4805]: I0216 21:25:11.073524 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fsm5q"] Feb 16 21:25:11 crc kubenswrapper[4805]: I0216 21:25:11.083748 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-6ff9-account-create-update-smp8q"] Feb 16 21:25:11 crc kubenswrapper[4805]: I0216 21:25:11.614263 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7feac1e5-959a-468e-905c-62a5a07f98d4" path="/var/lib/kubelet/pods/7feac1e5-959a-468e-905c-62a5a07f98d4/volumes" Feb 16 21:25:11 crc kubenswrapper[4805]: I0216 21:25:11.615054 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e67718-c8cf-4669-8b07-36e2fcc68898" path="/var/lib/kubelet/pods/e7e67718-c8cf-4669-8b07-36e2fcc68898/volumes" Feb 16 21:25:15 crc kubenswrapper[4805]: E0216 21:25:15.604966 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:25:18 crc kubenswrapper[4805]: I0216 21:25:18.606295 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:25:18 crc kubenswrapper[4805]: E0216 21:25:18.607354 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:25:25 crc kubenswrapper[4805]: I0216 21:25:25.079677 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q2dfh"] Feb 16 21:25:25 crc kubenswrapper[4805]: I0216 21:25:25.091914 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-q2dfh"] Feb 16 21:25:25 crc kubenswrapper[4805]: E0216 21:25:25.602980 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:25:25 crc kubenswrapper[4805]: I0216 21:25:25.617009 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e172a03-3c54-4817-954f-247328c52578" path="/var/lib/kubelet/pods/5e172a03-3c54-4817-954f-247328c52578/volumes" Feb 16 21:25:29 crc kubenswrapper[4805]: I0216 21:25:29.640858 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:25:29 crc kubenswrapper[4805]: E0216 21:25:29.643418 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:25:29 crc kubenswrapper[4805]: E0216 21:25:29.643553 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.058845 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6rt5d"] Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.073181 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6rt5d"] Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.221586 4805 scope.go:117] "RemoveContainer" containerID="5bec55d553cf84066046a8e2441ec508d1a634c388468a393a2550eb567a8675" Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.249385 4805 scope.go:117] "RemoveContainer" containerID="8231add64d6c1f35e48bc1951f0d37635a7b8cdb8c3bf3152d1e6a145284b077" Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.307525 4805 scope.go:117] "RemoveContainer" containerID="da8ae59ebf53b3be9f879b8fcbfddf58116281b591e167acdaba570d7693881b" Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.350780 4805 scope.go:117] "RemoveContainer" containerID="223456c77052e995a9836c6e84ad8d475762c56d82a2f5726cce53ed2ac54761" Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.406667 4805 scope.go:117] "RemoveContainer" containerID="2104153eb99c88ca9dddf4d1a825a38debe26da9e10448b6359e306b4d3a5d60" Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.471353 4805 scope.go:117] "RemoveContainer" containerID="95b1d1f1b9a13c9a6f8e6dcb85025c001b377cb336526c3111ac498121ecf773" Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.529789 4805 scope.go:117] "RemoveContainer" containerID="4a214a25c55be32d623921c2482bb176cf660c264316621d3e8e0e8fa33cb184" Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.548264 4805 scope.go:117] "RemoveContainer" 
containerID="24687ce1f74c71e60bf705a2e1013130230352b99b41c099a89ab612b3550228" Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.567492 4805 scope.go:117] "RemoveContainer" containerID="67f00f3c04a6a3aa4e15e1801aa240b269ee95a26fc0e780bc78b374663a9b02" Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.589261 4805 scope.go:117] "RemoveContainer" containerID="331a908851145aaf48f79e577a393098f06206e948ce58f98715c42339a9a002" Feb 16 21:25:34 crc kubenswrapper[4805]: I0216 21:25:34.610962 4805 scope.go:117] "RemoveContainer" containerID="06a4930dc414e0244111f902d77e93c78a0259a4190871ed8b8dc36f05befaab" Feb 16 21:25:35 crc kubenswrapper[4805]: I0216 21:25:35.615708 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8" path="/var/lib/kubelet/pods/b37c2e4e-f2f5-44a3-886b-91fa1a4d4ff8/volumes" Feb 16 21:25:40 crc kubenswrapper[4805]: E0216 21:25:40.600008 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:25:42 crc kubenswrapper[4805]: I0216 21:25:42.599544 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:25:42 crc kubenswrapper[4805]: E0216 21:25:42.600570 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:25:43 crc kubenswrapper[4805]: E0216 21:25:43.620391 
4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.039566 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-s9lw5"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.057293 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-65ee-account-create-update-ktsbv"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.072207 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0e19-account-create-update-r4lvm"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.086206 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-s9lw5"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.113700 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-n46mw"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.128861 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-65ee-account-create-update-ktsbv"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.139772 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0e19-account-create-update-r4lvm"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.154112 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8zf24"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.170895 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-n46mw"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.182853 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-d98e-account-create-update-t8mk9"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.195042 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8zf24"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.205205 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d98e-account-create-update-t8mk9"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.222691 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-05bd-account-create-update-n2bst"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.243352 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-49c2p"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.256255 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-05bd-account-create-update-n2bst"] Feb 16 21:25:48 crc kubenswrapper[4805]: I0216 21:25:48.267176 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-49c2p"] Feb 16 21:25:49 crc kubenswrapper[4805]: I0216 21:25:49.621357 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c3e5581-5041-48ca-be14-1220df2a86d8" path="/var/lib/kubelet/pods/0c3e5581-5041-48ca-be14-1220df2a86d8/volumes" Feb 16 21:25:49 crc kubenswrapper[4805]: I0216 21:25:49.623560 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="449f249d-010e-41e2-8314-9dd16925c7ae" path="/var/lib/kubelet/pods/449f249d-010e-41e2-8314-9dd16925c7ae/volumes" Feb 16 21:25:49 crc kubenswrapper[4805]: I0216 21:25:49.625246 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49902100-6d13-4aa5-9e40-fd76424f5dd4" path="/var/lib/kubelet/pods/49902100-6d13-4aa5-9e40-fd76424f5dd4/volumes" Feb 16 21:25:49 crc kubenswrapper[4805]: I0216 21:25:49.626702 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e49f50-c6fb-46b5-89ae-aa379290cc57" 
path="/var/lib/kubelet/pods/70e49f50-c6fb-46b5-89ae-aa379290cc57/volumes" Feb 16 21:25:49 crc kubenswrapper[4805]: I0216 21:25:49.629971 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a86d7c1-b6fa-410b-abf0-3f809f09ce66" path="/var/lib/kubelet/pods/7a86d7c1-b6fa-410b-abf0-3f809f09ce66/volumes" Feb 16 21:25:49 crc kubenswrapper[4805]: I0216 21:25:49.631532 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944f457d-a34a-4f92-8172-a23175048fad" path="/var/lib/kubelet/pods/944f457d-a34a-4f92-8172-a23175048fad/volumes" Feb 16 21:25:49 crc kubenswrapper[4805]: I0216 21:25:49.632941 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9796afa-7a50-49c7-b85b-8e3075f92596" path="/var/lib/kubelet/pods/c9796afa-7a50-49c7-b85b-8e3075f92596/volumes" Feb 16 21:25:49 crc kubenswrapper[4805]: I0216 21:25:49.635838 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d5b185-9950-4ff2-b56c-278a766f3c02" path="/var/lib/kubelet/pods/e8d5b185-9950-4ff2-b56c-278a766f3c02/volumes" Feb 16 21:25:53 crc kubenswrapper[4805]: I0216 21:25:53.035754 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-shtt5"] Feb 16 21:25:53 crc kubenswrapper[4805]: I0216 21:25:53.045253 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-shtt5"] Feb 16 21:25:53 crc kubenswrapper[4805]: I0216 21:25:53.611080 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea5126d-5794-4444-968f-696bee9afc30" path="/var/lib/kubelet/pods/4ea5126d-5794-4444-968f-696bee9afc30/volumes" Feb 16 21:25:54 crc kubenswrapper[4805]: E0216 21:25:54.600668 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" 
podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:25:55 crc kubenswrapper[4805]: I0216 21:25:55.598439 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:25:55 crc kubenswrapper[4805]: E0216 21:25:55.599611 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:25:57 crc kubenswrapper[4805]: E0216 21:25:57.600710 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:26:06 crc kubenswrapper[4805]: I0216 21:26:06.598405 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:26:06 crc kubenswrapper[4805]: E0216 21:26:06.599285 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:26:08 crc kubenswrapper[4805]: I0216 21:26:08.602276 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:26:08 crc 
kubenswrapper[4805]: E0216 21:26:08.739842 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:26:08 crc kubenswrapper[4805]: E0216 21:26:08.740215 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:26:08 crc kubenswrapper[4805]: E0216 21:26:08.740398 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:26:08 crc kubenswrapper[4805]: E0216 21:26:08.742220 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:26:12 crc kubenswrapper[4805]: E0216 21:26:12.602793 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:26:19 crc kubenswrapper[4805]: I0216 21:26:19.600229 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:26:19 crc kubenswrapper[4805]: E0216 21:26:19.602106 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:26:20 crc kubenswrapper[4805]: I0216 21:26:20.365542 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"490a8d059e400260c4694f6edba1a81d38fd229fa2bd2d72a515734efcb029e1"} Feb 16 21:26:22 crc kubenswrapper[4805]: I0216 21:26:22.057019 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-8kxrn"] Feb 16 21:26:22 crc kubenswrapper[4805]: I0216 21:26:22.072309 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-8kxrn"] Feb 16 21:26:23 crc kubenswrapper[4805]: I0216 21:26:23.627939 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1050edad-f277-4299-ab1d-c812bc4ae573" path="/var/lib/kubelet/pods/1050edad-f277-4299-ab1d-c812bc4ae573/volumes" Feb 16 21:26:25 crc kubenswrapper[4805]: E0216 21:26:25.703129 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:26:25 crc kubenswrapper[4805]: E0216 21:26:25.703713 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:26:25 crc kubenswrapper[4805]: E0216 21:26:25.703910 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:26:25 crc kubenswrapper[4805]: E0216 21:26:25.705137 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:26:31 crc kubenswrapper[4805]: I0216 21:26:31.067551 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-z62rr"] Feb 16 21:26:31 crc kubenswrapper[4805]: I0216 21:26:31.078706 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-z62rr"] Feb 16 21:26:31 crc kubenswrapper[4805]: E0216 21:26:31.605436 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:26:31 crc kubenswrapper[4805]: I0216 21:26:31.618656 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d50fc8fa-34b3-48cf-9e68-c474509271a3" path="/var/lib/kubelet/pods/d50fc8fa-34b3-48cf-9e68-c474509271a3/volumes" Feb 16 21:26:34 crc kubenswrapper[4805]: I0216 21:26:34.919898 4805 scope.go:117] "RemoveContainer" containerID="cebfc106de1608a7537c583999716b1f39f3a11e09531400d653eec3f941695b" Feb 16 21:26:34 crc kubenswrapper[4805]: I0216 21:26:34.954842 4805 scope.go:117] "RemoveContainer" containerID="c3d37194446c317bc153ef00e60fc26239293ddc4fabf476a2383ba85b708744" Feb 16 21:26:35 crc kubenswrapper[4805]: I0216 21:26:35.020553 4805 scope.go:117] "RemoveContainer" containerID="41bab549e938c4c3e30c40fb4e65fdb531eb5ac78d3d654fbf2763c1e4b63392" Feb 16 21:26:35 crc kubenswrapper[4805]: I0216 21:26:35.088672 4805 scope.go:117] "RemoveContainer" containerID="5f81022720a2755d99b601153f4f079e54187a9c57a8468b8d413a5c4407d35b" Feb 16 21:26:35 crc kubenswrapper[4805]: I0216 21:26:35.142055 4805 scope.go:117] "RemoveContainer" containerID="6398831b4bc144e181e8b8a03b045a9fb8d0b974446785722711afffce070b18" Feb 16 21:26:35 crc 
kubenswrapper[4805]: I0216 21:26:35.189848 4805 scope.go:117] "RemoveContainer" containerID="a8bf4f994787d3fd2c1a49c220460951d5500dff075ed1f3c574a020061c9ac8" Feb 16 21:26:35 crc kubenswrapper[4805]: I0216 21:26:35.249481 4805 scope.go:117] "RemoveContainer" containerID="3706d4fbe325e1fe22de6809fa8f9cfa06446baaedc6cab54ab55e2e12834f45" Feb 16 21:26:35 crc kubenswrapper[4805]: I0216 21:26:35.279998 4805 scope.go:117] "RemoveContainer" containerID="d8caf2fe733e6092a6c221917d52fd2a4997f7c77debdaa93d964748f39b47e9" Feb 16 21:26:35 crc kubenswrapper[4805]: I0216 21:26:35.300787 4805 scope.go:117] "RemoveContainer" containerID="edbc542b53d489b3887a95d9fd439f2cee6f62e9b4f31defc68635d2df88f123" Feb 16 21:26:35 crc kubenswrapper[4805]: I0216 21:26:35.322738 4805 scope.go:117] "RemoveContainer" containerID="41e0ab25fa658e62daeda20086a7cd82cbaf100a85732ed267b08193073defe2" Feb 16 21:26:35 crc kubenswrapper[4805]: I0216 21:26:35.362264 4805 scope.go:117] "RemoveContainer" containerID="e7ffdc4eac9f43b392b1a53d3bb8dad0dacfe134d4ac7a56c3efc3b6a9b09932" Feb 16 21:26:35 crc kubenswrapper[4805]: I0216 21:26:35.384562 4805 scope.go:117] "RemoveContainer" containerID="1ed179f65704e7a0c7294a3db5a64d5b69d9131b7dce11a4f8b893cd233cf06b" Feb 16 21:26:36 crc kubenswrapper[4805]: I0216 21:26:36.043770 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2255v"] Feb 16 21:26:36 crc kubenswrapper[4805]: I0216 21:26:36.054340 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2255v"] Feb 16 21:26:37 crc kubenswrapper[4805]: E0216 21:26:37.613334 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:26:37 crc 
kubenswrapper[4805]: I0216 21:26:37.629971 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d3ea232-36aa-48a2-b2d4-449767fd61fb" path="/var/lib/kubelet/pods/0d3ea232-36aa-48a2-b2d4-449767fd61fb/volumes" Feb 16 21:26:42 crc kubenswrapper[4805]: E0216 21:26:42.600912 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:26:46 crc kubenswrapper[4805]: I0216 21:26:46.029601 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-txbxn"] Feb 16 21:26:46 crc kubenswrapper[4805]: I0216 21:26:46.043893 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-txbxn"] Feb 16 21:26:47 crc kubenswrapper[4805]: I0216 21:26:47.610977 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab6c7759-7bcf-4efa-b50f-b73e87f20842" path="/var/lib/kubelet/pods/ab6c7759-7bcf-4efa-b50f-b73e87f20842/volumes" Feb 16 21:26:48 crc kubenswrapper[4805]: I0216 21:26:48.029146 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9ms99"] Feb 16 21:26:48 crc kubenswrapper[4805]: I0216 21:26:48.045366 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9ms99"] Feb 16 21:26:49 crc kubenswrapper[4805]: I0216 21:26:49.616451 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8125a07-0bfb-4381-80e2-bf5bb1525026" path="/var/lib/kubelet/pods/c8125a07-0bfb-4381-80e2-bf5bb1525026/volumes" Feb 16 21:26:52 crc kubenswrapper[4805]: E0216 21:26:52.601641 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:26:54 crc kubenswrapper[4805]: E0216 21:26:54.599957 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:27:07 crc kubenswrapper[4805]: E0216 21:27:07.601796 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:27:07 crc kubenswrapper[4805]: E0216 21:27:07.602574 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:27:19 crc kubenswrapper[4805]: E0216 21:27:19.599490 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:27:20 crc kubenswrapper[4805]: E0216 21:27:20.601503 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:27:31 crc kubenswrapper[4805]: E0216 21:27:31.599586 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:27:32 crc kubenswrapper[4805]: E0216 21:27:32.602470 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:27:33 crc kubenswrapper[4805]: I0216 21:27:33.062111 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-r8r58"] Feb 16 21:27:33 crc kubenswrapper[4805]: I0216 21:27:33.072704 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-r8r58"] Feb 16 21:27:33 crc kubenswrapper[4805]: I0216 21:27:33.613283 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a5da24d-77bc-444c-a344-c811c1430ea8" path="/var/lib/kubelet/pods/8a5da24d-77bc-444c-a344-c811c1430ea8/volumes" Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.047811 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-13af-account-create-update-2jbb4"] Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.071171 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1975-account-create-update-sbmx2"] Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.082267 4805 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-xckfh"] Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.091521 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-13af-account-create-update-2jbb4"] Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.122038 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-xckfh"] Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.133967 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-11b9-account-create-update-92pwx"] Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.144661 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-tt79f"] Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.155963 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-11b9-account-create-update-92pwx"] Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.165201 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-tt79f"] Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.178164 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1975-account-create-update-sbmx2"] Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.612507 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="165cd002-0510-49a6-8322-5e2fe84e99c1" path="/var/lib/kubelet/pods/165cd002-0510-49a6-8322-5e2fe84e99c1/volumes" Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.613839 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7678f4b8-da9a-4032-a853-1ec0ec5386c0" path="/var/lib/kubelet/pods/7678f4b8-da9a-4032-a853-1ec0ec5386c0/volumes" Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.614522 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa73323e-8833-4118-8d6b-f6de2261b33c" 
path="/var/lib/kubelet/pods/aa73323e-8833-4118-8d6b-f6de2261b33c/volumes" Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.615276 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af9beaaa-c93c-4f38-93d4-c86d4156ad44" path="/var/lib/kubelet/pods/af9beaaa-c93c-4f38-93d4-c86d4156ad44/volumes" Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.616418 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7c489f-b7a1-4cc3-827a-3de24bd86115" path="/var/lib/kubelet/pods/dc7c489f-b7a1-4cc3-827a-3de24bd86115/volumes" Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.721301 4805 scope.go:117] "RemoveContainer" containerID="27c6504c014c38dd152e300c2982f98088bd369b700b8093230c75b2bb377dac" Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.799085 4805 scope.go:117] "RemoveContainer" containerID="8448f214ed156e900ee23963983a0f608808b4882e1b8a4183caa7b1fc178d4f" Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.924699 4805 scope.go:117] "RemoveContainer" containerID="472fda4e879fd280b1a3723d26755f78898ec620f2139b4cbb65f3f08e152152" Feb 16 21:27:35 crc kubenswrapper[4805]: I0216 21:27:35.951523 4805 scope.go:117] "RemoveContainer" containerID="1987ac6c330121034d1aeb641468f4aa56eaffe98de924216769859e65ddb5b1" Feb 16 21:27:36 crc kubenswrapper[4805]: I0216 21:27:36.003451 4805 scope.go:117] "RemoveContainer" containerID="d86b238e2ac4de09a850d02bf51b519f4562c068a1b77efdb37d56c2b91568be" Feb 16 21:27:36 crc kubenswrapper[4805]: I0216 21:27:36.061228 4805 scope.go:117] "RemoveContainer" containerID="6fafffb607f4993167f2ff552cb24d75c5da7c53288dffadde8b339c867c9ce7" Feb 16 21:27:36 crc kubenswrapper[4805]: I0216 21:27:36.124979 4805 scope.go:117] "RemoveContainer" containerID="1e4796f0357243266676f33412894b122e3852ff68bbbdfcce109d1d6908c0cc" Feb 16 21:27:36 crc kubenswrapper[4805]: I0216 21:27:36.144310 4805 scope.go:117] "RemoveContainer" 
containerID="f3b24048f88b0f72c516a827c497586c32e17a3bcbae6d63a088dff920ac76d0" Feb 16 21:27:36 crc kubenswrapper[4805]: I0216 21:27:36.165616 4805 scope.go:117] "RemoveContainer" containerID="c11f193ec70d131b65472c4c2ee5963c85c45e4a264e6ccff7cb89ae04ac4ac8" Feb 16 21:27:44 crc kubenswrapper[4805]: E0216 21:27:44.601298 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:27:47 crc kubenswrapper[4805]: E0216 21:27:47.600949 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:27:59 crc kubenswrapper[4805]: E0216 21:27:59.602626 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:28:00 crc kubenswrapper[4805]: E0216 21:28:00.606029 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:28:08 crc kubenswrapper[4805]: I0216 21:28:08.079112 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-86jb7"] Feb 16 21:28:08 crc kubenswrapper[4805]: I0216 21:28:08.093251 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-86jb7"] Feb 16 21:28:09 crc kubenswrapper[4805]: I0216 21:28:09.646386 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="697d83c1-bcef-40ab-b260-070417df0a62" path="/var/lib/kubelet/pods/697d83c1-bcef-40ab-b260-070417df0a62/volumes" Feb 16 21:28:13 crc kubenswrapper[4805]: E0216 21:28:13.611511 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:28:14 crc kubenswrapper[4805]: E0216 21:28:14.600910 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:28:27 crc kubenswrapper[4805]: E0216 21:28:27.600836 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:28:28 crc kubenswrapper[4805]: E0216 21:28:28.602674 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:28:32 crc kubenswrapper[4805]: I0216 21:28:32.081401 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-kw7z8"] Feb 16 21:28:32 crc kubenswrapper[4805]: I0216 21:28:32.102914 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-kw7z8"] Feb 16 21:28:33 crc kubenswrapper[4805]: I0216 21:28:33.057673 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gxz95"] Feb 16 21:28:33 crc kubenswrapper[4805]: I0216 21:28:33.073406 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gxz95"] Feb 16 21:28:33 crc kubenswrapper[4805]: I0216 21:28:33.620801 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205f4efe-0a2d-4d28-a929-c89b671cefae" path="/var/lib/kubelet/pods/205f4efe-0a2d-4d28-a929-c89b671cefae/volumes" Feb 16 21:28:33 crc kubenswrapper[4805]: I0216 21:28:33.622128 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="521423fc-6efd-4f61-89f3-f1523eb8e9f5" path="/var/lib/kubelet/pods/521423fc-6efd-4f61-89f3-f1523eb8e9f5/volumes" Feb 16 21:28:36 crc kubenswrapper[4805]: I0216 21:28:36.369858 4805 scope.go:117] "RemoveContainer" containerID="0d3f55d1e96ba4c67d4b54d5cd430c94cc1a55a8cc7fce91c676a45b9575e56b" Feb 16 21:28:36 crc kubenswrapper[4805]: I0216 21:28:36.445982 4805 scope.go:117] "RemoveContainer" containerID="6490e84da6532409dec05cfaae4b31e66b16ad62ad6caa714a2ffd4f6ea6c2d3" Feb 16 21:28:36 crc kubenswrapper[4805]: I0216 21:28:36.481269 4805 scope.go:117] "RemoveContainer" containerID="70c3496eaa3cdd95cb25af45b1d8d0d1dc143e9edb600988a1d0ede7d7095ac8" Feb 16 21:28:38 crc kubenswrapper[4805]: I0216 21:28:38.074809 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/aodh-db-create-mtghv"] Feb 16 21:28:38 crc kubenswrapper[4805]: I0216 21:28:38.082418 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-d63c-account-create-update-qrtgj"] Feb 16 21:28:38 crc kubenswrapper[4805]: I0216 21:28:38.090390 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-mtghv"] Feb 16 21:28:38 crc kubenswrapper[4805]: I0216 21:28:38.099028 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-d63c-account-create-update-qrtgj"] Feb 16 21:28:38 crc kubenswrapper[4805]: I0216 21:28:38.099545 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:28:38 crc kubenswrapper[4805]: I0216 21:28:38.099592 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:28:39 crc kubenswrapper[4805]: I0216 21:28:39.616553 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973a9e10-9520-4eea-90d8-2e52e480d949" path="/var/lib/kubelet/pods/973a9e10-9520-4eea-90d8-2e52e480d949/volumes" Feb 16 21:28:39 crc kubenswrapper[4805]: I0216 21:28:39.617892 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f5353d-4097-41a1-83fe-7f7747ed9fb7" path="/var/lib/kubelet/pods/d8f5353d-4097-41a1-83fe-7f7747ed9fb7/volumes" Feb 16 21:28:41 crc kubenswrapper[4805]: E0216 21:28:41.600960 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:28:43 crc kubenswrapper[4805]: E0216 21:28:43.610175 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:28:48 crc kubenswrapper[4805]: I0216 21:28:48.035724 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-xnz5m"] Feb 16 21:28:48 crc kubenswrapper[4805]: I0216 21:28:48.047426 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-xnz5m"] Feb 16 21:28:49 crc kubenswrapper[4805]: I0216 21:28:49.610399 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f843e3-43b6-405f-84be-dccbf9dbceac" path="/var/lib/kubelet/pods/c8f843e3-43b6-405f-84be-dccbf9dbceac/volumes" Feb 16 21:28:55 crc kubenswrapper[4805]: E0216 21:28:55.600454 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:28:58 crc kubenswrapper[4805]: E0216 21:28:58.602465 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:29:08 crc 
kubenswrapper[4805]: I0216 21:29:08.099574 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:29:08 crc kubenswrapper[4805]: I0216 21:29:08.100316 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:29:10 crc kubenswrapper[4805]: E0216 21:29:10.599928 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:29:11 crc kubenswrapper[4805]: E0216 21:29:11.599987 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:29:18 crc kubenswrapper[4805]: I0216 21:29:18.068892 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-9q4ww"] Feb 16 21:29:18 crc kubenswrapper[4805]: I0216 21:29:18.086125 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-9q4ww"] Feb 16 21:29:19 crc kubenswrapper[4805]: I0216 21:29:19.610247 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="86a8a573-3330-4e63-8261-ac19ae7bf18b" path="/var/lib/kubelet/pods/86a8a573-3330-4e63-8261-ac19ae7bf18b/volumes" Feb 16 21:29:21 crc kubenswrapper[4805]: E0216 21:29:21.606951 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:29:25 crc kubenswrapper[4805]: E0216 21:29:25.601148 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:29:36 crc kubenswrapper[4805]: E0216 21:29:36.605096 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:29:36 crc kubenswrapper[4805]: I0216 21:29:36.630983 4805 scope.go:117] "RemoveContainer" containerID="0c96b82eef487dc9c4b7b65e850d2b4570af7b32de589fa452373ab506c4b702" Feb 16 21:29:36 crc kubenswrapper[4805]: I0216 21:29:36.682095 4805 scope.go:117] "RemoveContainer" containerID="0a0f156a551a55e047b6b01dcf925d2bd2fcddb4120b207e38714ce69383c852" Feb 16 21:29:36 crc kubenswrapper[4805]: I0216 21:29:36.766812 4805 scope.go:117] "RemoveContainer" containerID="0d413b93ba5891014f531951463cb7133080be22fbbfbd50448026e5ca7535ba" Feb 16 21:29:36 crc kubenswrapper[4805]: I0216 21:29:36.815821 4805 scope.go:117] "RemoveContainer" 
containerID="d975fe4b41131b7630b710a3d9128f93ab327a1ac6bd19d0e467d51e731d6c79" Feb 16 21:29:38 crc kubenswrapper[4805]: I0216 21:29:38.099437 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:29:38 crc kubenswrapper[4805]: I0216 21:29:38.099942 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:29:38 crc kubenswrapper[4805]: I0216 21:29:38.100020 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:29:38 crc kubenswrapper[4805]: I0216 21:29:38.101357 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"490a8d059e400260c4694f6edba1a81d38fd229fa2bd2d72a515734efcb029e1"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:29:38 crc kubenswrapper[4805]: I0216 21:29:38.101486 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://490a8d059e400260c4694f6edba1a81d38fd229fa2bd2d72a515734efcb029e1" gracePeriod=600 Feb 16 21:29:38 crc kubenswrapper[4805]: E0216 21:29:38.600617 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:29:38 crc kubenswrapper[4805]: I0216 21:29:38.897908 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="490a8d059e400260c4694f6edba1a81d38fd229fa2bd2d72a515734efcb029e1" exitCode=0 Feb 16 21:29:38 crc kubenswrapper[4805]: I0216 21:29:38.898023 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"490a8d059e400260c4694f6edba1a81d38fd229fa2bd2d72a515734efcb029e1"} Feb 16 21:29:38 crc kubenswrapper[4805]: I0216 21:29:38.898154 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac"} Feb 16 21:29:38 crc kubenswrapper[4805]: I0216 21:29:38.898177 4805 scope.go:117] "RemoveContainer" containerID="a2db2992ed7d1806846bcf39eb93da5afdf931435dbc882475676a947f1ced6e" Feb 16 21:29:51 crc kubenswrapper[4805]: E0216 21:29:51.600235 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:29:52 crc kubenswrapper[4805]: E0216 21:29:52.599940 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.165652 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq"] Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.168887 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.170788 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.172033 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.181174 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq"] Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.277420 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-config-volume\") pod \"collect-profiles-29521290-rpxhq\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.277465 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-secret-volume\") pod \"collect-profiles-29521290-rpxhq\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.277485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj7zt\" (UniqueName: \"kubernetes.io/projected/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-kube-api-access-sj7zt\") pod \"collect-profiles-29521290-rpxhq\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.381497 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-config-volume\") pod \"collect-profiles-29521290-rpxhq\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.381582 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-secret-volume\") pod \"collect-profiles-29521290-rpxhq\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.381625 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj7zt\" (UniqueName: \"kubernetes.io/projected/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-kube-api-access-sj7zt\") pod \"collect-profiles-29521290-rpxhq\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.382796 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-config-volume\") pod \"collect-profiles-29521290-rpxhq\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.392390 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-secret-volume\") pod \"collect-profiles-29521290-rpxhq\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.401656 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj7zt\" (UniqueName: \"kubernetes.io/projected/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-kube-api-access-sj7zt\") pod \"collect-profiles-29521290-rpxhq\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:00 crc kubenswrapper[4805]: I0216 21:30:00.536041 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:01 crc kubenswrapper[4805]: I0216 21:30:01.053414 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq"] Feb 16 21:30:01 crc kubenswrapper[4805]: W0216 21:30:01.064851 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6f857ec_f8a8_4b15_bb67_4fc1f0ba0ecc.slice/crio-bd4f9284cfa5e5b22777532cbccf4b8e9bb7d69d07e4b111f518b511554c1e4d WatchSource:0}: Error finding container bd4f9284cfa5e5b22777532cbccf4b8e9bb7d69d07e4b111f518b511554c1e4d: Status 404 returned error can't find the container with id bd4f9284cfa5e5b22777532cbccf4b8e9bb7d69d07e4b111f518b511554c1e4d Feb 16 21:30:01 crc kubenswrapper[4805]: I0216 21:30:01.172266 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" event={"ID":"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc","Type":"ContainerStarted","Data":"bd4f9284cfa5e5b22777532cbccf4b8e9bb7d69d07e4b111f518b511554c1e4d"} Feb 16 21:30:02 crc kubenswrapper[4805]: I0216 21:30:02.187333 4805 generic.go:334] "Generic (PLEG): container finished" podID="d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc" containerID="e6cb580316b32dff7e52490466991c2810d78f5397b32758285d0ef81c36c263" exitCode=0 Feb 16 21:30:02 crc kubenswrapper[4805]: I0216 21:30:02.187405 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" event={"ID":"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc","Type":"ContainerDied","Data":"e6cb580316b32dff7e52490466991c2810d78f5397b32758285d0ef81c36c263"} Feb 16 21:30:03 crc kubenswrapper[4805]: I0216 21:30:03.655807 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:03 crc kubenswrapper[4805]: I0216 21:30:03.764274 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-config-volume\") pod \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " Feb 16 21:30:03 crc kubenswrapper[4805]: I0216 21:30:03.764655 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-secret-volume\") pod \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " Feb 16 21:30:03 crc kubenswrapper[4805]: I0216 21:30:03.765035 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj7zt\" (UniqueName: \"kubernetes.io/projected/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-kube-api-access-sj7zt\") pod \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\" (UID: \"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc\") " Feb 16 21:30:03 crc kubenswrapper[4805]: I0216 21:30:03.765252 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-config-volume" (OuterVolumeSpecName: "config-volume") pod "d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc" (UID: "d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:30:03 crc kubenswrapper[4805]: I0216 21:30:03.766243 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:30:03 crc kubenswrapper[4805]: I0216 21:30:03.772142 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-kube-api-access-sj7zt" (OuterVolumeSpecName: "kube-api-access-sj7zt") pod "d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc" (UID: "d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc"). InnerVolumeSpecName "kube-api-access-sj7zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:30:03 crc kubenswrapper[4805]: I0216 21:30:03.772829 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc" (UID: "d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:30:03 crc kubenswrapper[4805]: I0216 21:30:03.874460 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:30:03 crc kubenswrapper[4805]: I0216 21:30:03.874534 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj7zt\" (UniqueName: \"kubernetes.io/projected/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc-kube-api-access-sj7zt\") on node \"crc\" DevicePath \"\"" Feb 16 21:30:04 crc kubenswrapper[4805]: I0216 21:30:04.215264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" event={"ID":"d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc","Type":"ContainerDied","Data":"bd4f9284cfa5e5b22777532cbccf4b8e9bb7d69d07e4b111f518b511554c1e4d"} Feb 16 21:30:04 crc kubenswrapper[4805]: I0216 21:30:04.215733 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd4f9284cfa5e5b22777532cbccf4b8e9bb7d69d07e4b111f518b511554c1e4d" Feb 16 21:30:04 crc kubenswrapper[4805]: I0216 21:30:04.215388 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq" Feb 16 21:30:04 crc kubenswrapper[4805]: I0216 21:30:04.751130 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk"] Feb 16 21:30:04 crc kubenswrapper[4805]: I0216 21:30:04.762277 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521245-h42wk"] Feb 16 21:30:05 crc kubenswrapper[4805]: I0216 21:30:05.619867 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a451d6a2-4e84-4838-89be-08a88869a68e" path="/var/lib/kubelet/pods/a451d6a2-4e84-4838-89be-08a88869a68e/volumes" Feb 16 21:30:06 crc kubenswrapper[4805]: E0216 21:30:06.601462 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:30:06 crc kubenswrapper[4805]: E0216 21:30:06.601712 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.699048 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lzsqj"] Feb 16 21:30:09 crc kubenswrapper[4805]: E0216 21:30:09.701649 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc" containerName="collect-profiles" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.701800 4805 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc" containerName="collect-profiles" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.702171 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc" containerName="collect-profiles" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.704395 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.710817 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzsqj"] Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.751814 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr5wh\" (UniqueName: \"kubernetes.io/projected/cd02c782-cac6-4fd1-af90-154f57b85754-kube-api-access-kr5wh\") pod \"redhat-operators-lzsqj\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.752313 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-catalog-content\") pod \"redhat-operators-lzsqj\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.752529 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-utilities\") pod \"redhat-operators-lzsqj\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.855100 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-catalog-content\") pod \"redhat-operators-lzsqj\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.855230 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-utilities\") pod \"redhat-operators-lzsqj\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.855347 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr5wh\" (UniqueName: \"kubernetes.io/projected/cd02c782-cac6-4fd1-af90-154f57b85754-kube-api-access-kr5wh\") pod \"redhat-operators-lzsqj\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.855686 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-catalog-content\") pod \"redhat-operators-lzsqj\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.855832 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-utilities\") pod \"redhat-operators-lzsqj\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:09 crc kubenswrapper[4805]: I0216 21:30:09.883406 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kr5wh\" (UniqueName: \"kubernetes.io/projected/cd02c782-cac6-4fd1-af90-154f57b85754-kube-api-access-kr5wh\") pod \"redhat-operators-lzsqj\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:10 crc kubenswrapper[4805]: I0216 21:30:10.087855 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:10 crc kubenswrapper[4805]: I0216 21:30:10.661074 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzsqj"] Feb 16 21:30:11 crc kubenswrapper[4805]: I0216 21:30:11.314912 4805 generic.go:334] "Generic (PLEG): container finished" podID="cd02c782-cac6-4fd1-af90-154f57b85754" containerID="57a7f1e063ca970655eb58df34429348c99daebe7e220dfe89df76aacc0feb83" exitCode=0 Feb 16 21:30:11 crc kubenswrapper[4805]: I0216 21:30:11.314972 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzsqj" event={"ID":"cd02c782-cac6-4fd1-af90-154f57b85754","Type":"ContainerDied","Data":"57a7f1e063ca970655eb58df34429348c99daebe7e220dfe89df76aacc0feb83"} Feb 16 21:30:11 crc kubenswrapper[4805]: I0216 21:30:11.315264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzsqj" event={"ID":"cd02c782-cac6-4fd1-af90-154f57b85754","Type":"ContainerStarted","Data":"8f715c48b1870771db4860be4bc7ad6ce9af030659cf8bec05e35a3be0668b62"} Feb 16 21:30:12 crc kubenswrapper[4805]: I0216 21:30:12.326945 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzsqj" event={"ID":"cd02c782-cac6-4fd1-af90-154f57b85754","Type":"ContainerStarted","Data":"48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6"} Feb 16 21:30:17 crc kubenswrapper[4805]: I0216 21:30:17.402023 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="cd02c782-cac6-4fd1-af90-154f57b85754" containerID="48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6" exitCode=0 Feb 16 21:30:17 crc kubenswrapper[4805]: I0216 21:30:17.402276 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzsqj" event={"ID":"cd02c782-cac6-4fd1-af90-154f57b85754","Type":"ContainerDied","Data":"48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6"} Feb 16 21:30:18 crc kubenswrapper[4805]: I0216 21:30:18.434647 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzsqj" event={"ID":"cd02c782-cac6-4fd1-af90-154f57b85754","Type":"ContainerStarted","Data":"e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104"} Feb 16 21:30:19 crc kubenswrapper[4805]: E0216 21:30:19.599896 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:30:20 crc kubenswrapper[4805]: I0216 21:30:20.089976 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:20 crc kubenswrapper[4805]: I0216 21:30:20.090289 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:21 crc kubenswrapper[4805]: I0216 21:30:21.167647 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lzsqj" podUID="cd02c782-cac6-4fd1-af90-154f57b85754" containerName="registry-server" probeResult="failure" output=< Feb 16 21:30:21 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:30:21 crc kubenswrapper[4805]: > Feb 16 21:30:21 crc 
kubenswrapper[4805]: E0216 21:30:21.599870 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:30:30 crc kubenswrapper[4805]: E0216 21:30:30.601490 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:30:31 crc kubenswrapper[4805]: I0216 21:30:31.161972 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lzsqj" podUID="cd02c782-cac6-4fd1-af90-154f57b85754" containerName="registry-server" probeResult="failure" output=< Feb 16 21:30:31 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:30:31 crc kubenswrapper[4805]: > Feb 16 21:30:34 crc kubenswrapper[4805]: E0216 21:30:34.599374 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:30:36 crc kubenswrapper[4805]: I0216 21:30:36.993894 4805 scope.go:117] "RemoveContainer" containerID="f2648cd9bb592c1d12ae53417781e41502d03104d2820858a4a54683fcb989b4" Feb 16 21:30:40 crc kubenswrapper[4805]: I0216 21:30:40.188760 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:40 crc 
kubenswrapper[4805]: I0216 21:30:40.219263 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lzsqj" podStartSLOduration=24.746885929 podStartE2EDuration="31.219241022s" podCreationTimestamp="2026-02-16 21:30:09 +0000 UTC" firstStartedPulling="2026-02-16 21:30:11.31731308 +0000 UTC m=+2029.135996385" lastFinishedPulling="2026-02-16 21:30:17.789668173 +0000 UTC m=+2035.608351478" observedRunningTime="2026-02-16 21:30:18.461928121 +0000 UTC m=+2036.280611446" watchObservedRunningTime="2026-02-16 21:30:40.219241022 +0000 UTC m=+2058.037924327" Feb 16 21:30:40 crc kubenswrapper[4805]: I0216 21:30:40.273078 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:40 crc kubenswrapper[4805]: I0216 21:30:40.905865 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzsqj"] Feb 16 21:30:41 crc kubenswrapper[4805]: I0216 21:30:41.711662 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lzsqj" podUID="cd02c782-cac6-4fd1-af90-154f57b85754" containerName="registry-server" containerID="cri-o://e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104" gracePeriod=2 Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.322378 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.371704 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-utilities\") pod \"cd02c782-cac6-4fd1-af90-154f57b85754\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.371943 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-catalog-content\") pod \"cd02c782-cac6-4fd1-af90-154f57b85754\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.372036 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr5wh\" (UniqueName: \"kubernetes.io/projected/cd02c782-cac6-4fd1-af90-154f57b85754-kube-api-access-kr5wh\") pod \"cd02c782-cac6-4fd1-af90-154f57b85754\" (UID: \"cd02c782-cac6-4fd1-af90-154f57b85754\") " Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.373302 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-utilities" (OuterVolumeSpecName: "utilities") pod "cd02c782-cac6-4fd1-af90-154f57b85754" (UID: "cd02c782-cac6-4fd1-af90-154f57b85754"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.402998 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd02c782-cac6-4fd1-af90-154f57b85754-kube-api-access-kr5wh" (OuterVolumeSpecName: "kube-api-access-kr5wh") pod "cd02c782-cac6-4fd1-af90-154f57b85754" (UID: "cd02c782-cac6-4fd1-af90-154f57b85754"). InnerVolumeSpecName "kube-api-access-kr5wh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.474773 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr5wh\" (UniqueName: \"kubernetes.io/projected/cd02c782-cac6-4fd1-af90-154f57b85754-kube-api-access-kr5wh\") on node \"crc\" DevicePath \"\"" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.474808 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.505205 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd02c782-cac6-4fd1-af90-154f57b85754" (UID: "cd02c782-cac6-4fd1-af90-154f57b85754"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.577469 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd02c782-cac6-4fd1-af90-154f57b85754-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.743332 4805 generic.go:334] "Generic (PLEG): container finished" podID="cd02c782-cac6-4fd1-af90-154f57b85754" containerID="e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104" exitCode=0 Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.743556 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzsqj" event={"ID":"cd02c782-cac6-4fd1-af90-154f57b85754","Type":"ContainerDied","Data":"e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104"} Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.743614 4805 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzsqj" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.744469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzsqj" event={"ID":"cd02c782-cac6-4fd1-af90-154f57b85754","Type":"ContainerDied","Data":"8f715c48b1870771db4860be4bc7ad6ce9af030659cf8bec05e35a3be0668b62"} Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.744486 4805 scope.go:117] "RemoveContainer" containerID="e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.802641 4805 scope.go:117] "RemoveContainer" containerID="48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.804048 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzsqj"] Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.814054 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lzsqj"] Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.838350 4805 scope.go:117] "RemoveContainer" containerID="57a7f1e063ca970655eb58df34429348c99daebe7e220dfe89df76aacc0feb83" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.909533 4805 scope.go:117] "RemoveContainer" containerID="e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104" Feb 16 21:30:42 crc kubenswrapper[4805]: E0216 21:30:42.910376 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104\": container with ID starting with e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104 not found: ID does not exist" containerID="e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.910440 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104"} err="failed to get container status \"e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104\": rpc error: code = NotFound desc = could not find container \"e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104\": container with ID starting with e5f5200ef94ab31fb51d6321e0b06cae9685bc4fcdf0c5c44905611952dde104 not found: ID does not exist" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.910483 4805 scope.go:117] "RemoveContainer" containerID="48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6" Feb 16 21:30:42 crc kubenswrapper[4805]: E0216 21:30:42.911137 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6\": container with ID starting with 48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6 not found: ID does not exist" containerID="48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.911180 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6"} err="failed to get container status \"48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6\": rpc error: code = NotFound desc = could not find container \"48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6\": container with ID starting with 48afe9adc3ba8add68505cab5a12e8bb8beb8373d7e68e544a3f5872a84b39b6 not found: ID does not exist" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.911212 4805 scope.go:117] "RemoveContainer" containerID="57a7f1e063ca970655eb58df34429348c99daebe7e220dfe89df76aacc0feb83" Feb 16 21:30:42 crc kubenswrapper[4805]: E0216 
21:30:42.911782 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57a7f1e063ca970655eb58df34429348c99daebe7e220dfe89df76aacc0feb83\": container with ID starting with 57a7f1e063ca970655eb58df34429348c99daebe7e220dfe89df76aacc0feb83 not found: ID does not exist" containerID="57a7f1e063ca970655eb58df34429348c99daebe7e220dfe89df76aacc0feb83" Feb 16 21:30:42 crc kubenswrapper[4805]: I0216 21:30:42.911823 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57a7f1e063ca970655eb58df34429348c99daebe7e220dfe89df76aacc0feb83"} err="failed to get container status \"57a7f1e063ca970655eb58df34429348c99daebe7e220dfe89df76aacc0feb83\": rpc error: code = NotFound desc = could not find container \"57a7f1e063ca970655eb58df34429348c99daebe7e220dfe89df76aacc0feb83\": container with ID starting with 57a7f1e063ca970655eb58df34429348c99daebe7e220dfe89df76aacc0feb83 not found: ID does not exist" Feb 16 21:30:43 crc kubenswrapper[4805]: E0216 21:30:43.609459 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:30:43 crc kubenswrapper[4805]: I0216 21:30:43.618660 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd02c782-cac6-4fd1-af90-154f57b85754" path="/var/lib/kubelet/pods/cd02c782-cac6-4fd1-af90-154f57b85754/volumes" Feb 16 21:30:45 crc kubenswrapper[4805]: E0216 21:30:45.600560 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:30:57 crc kubenswrapper[4805]: E0216 21:30:57.600625 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:30:58 crc kubenswrapper[4805]: E0216 21:30:58.600428 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:31:09 crc kubenswrapper[4805]: I0216 21:31:09.601263 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:31:09 crc kubenswrapper[4805]: E0216 21:31:09.739423 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:31:09 crc kubenswrapper[4805]: E0216 21:31:09.739531 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:31:09 crc kubenswrapper[4805]: E0216 21:31:09.739796 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:31:09 crc kubenswrapper[4805]: E0216 21:31:09.741096 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:31:10 crc kubenswrapper[4805]: E0216 21:31:10.599645 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:31:22 crc kubenswrapper[4805]: E0216 21:31:22.599475 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:31:22 crc kubenswrapper[4805]: E0216 21:31:22.599492 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:31:33 crc kubenswrapper[4805]: E0216 21:31:33.749051 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:31:33 crc kubenswrapper[4805]: E0216 21:31:33.749530 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:31:33 crc kubenswrapper[4805]: E0216 21:31:33.749667 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca
-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 21:31:33 crc kubenswrapper[4805]: E0216 21:31:33.751260 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:31:34 crc kubenswrapper[4805]: E0216 21:31:34.599414 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:31:38 crc kubenswrapper[4805]: I0216 21:31:38.099503 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:31:38 crc kubenswrapper[4805]: I0216 21:31:38.100068 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:31:45 crc kubenswrapper[4805]: E0216 21:31:45.600876 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:31:47 crc kubenswrapper[4805]: I0216 21:31:47.444870 4805 generic.go:334] "Generic (PLEG): container finished" podID="f7abc29d-8762-4f66-9b74-5bae943250ee" containerID="0dc557d7661ffb281ed1c92145cea28bfb7e1c582544030cf071feebc25056d9" exitCode=2 Feb 16 21:31:47 crc kubenswrapper[4805]: I0216 21:31:47.444936 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" event={"ID":"f7abc29d-8762-4f66-9b74-5bae943250ee","Type":"ContainerDied","Data":"0dc557d7661ffb281ed1c92145cea28bfb7e1c582544030cf071feebc25056d9"} Feb 16 21:31:48 crc kubenswrapper[4805]: E0216 21:31:48.599749 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:31:48 crc kubenswrapper[4805]: I0216 21:31:48.924990 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 21:31:49.079991 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-ssh-key-openstack-edpm-ipam\") pod \"f7abc29d-8762-4f66-9b74-5bae943250ee\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 21:31:49.080124 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj8w7\" (UniqueName: \"kubernetes.io/projected/f7abc29d-8762-4f66-9b74-5bae943250ee-kube-api-access-kj8w7\") pod \"f7abc29d-8762-4f66-9b74-5bae943250ee\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 21:31:49.080244 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-inventory\") pod \"f7abc29d-8762-4f66-9b74-5bae943250ee\" (UID: \"f7abc29d-8762-4f66-9b74-5bae943250ee\") " Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 21:31:49.100622 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7abc29d-8762-4f66-9b74-5bae943250ee-kube-api-access-kj8w7" (OuterVolumeSpecName: "kube-api-access-kj8w7") pod "f7abc29d-8762-4f66-9b74-5bae943250ee" (UID: "f7abc29d-8762-4f66-9b74-5bae943250ee"). InnerVolumeSpecName "kube-api-access-kj8w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 21:31:49.130950 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-inventory" (OuterVolumeSpecName: "inventory") pod "f7abc29d-8762-4f66-9b74-5bae943250ee" (UID: "f7abc29d-8762-4f66-9b74-5bae943250ee"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 21:31:49.133683 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f7abc29d-8762-4f66-9b74-5bae943250ee" (UID: "f7abc29d-8762-4f66-9b74-5bae943250ee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 21:31:49.183955 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 21:31:49.184001 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj8w7\" (UniqueName: \"kubernetes.io/projected/f7abc29d-8762-4f66-9b74-5bae943250ee-kube-api-access-kj8w7\") on node \"crc\" DevicePath \"\"" Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 21:31:49.184033 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7abc29d-8762-4f66-9b74-5bae943250ee-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 21:31:49.471248 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" event={"ID":"f7abc29d-8762-4f66-9b74-5bae943250ee","Type":"ContainerDied","Data":"ab9b83135d63e0ba8cbba6fc194fc26d4555d078ac1dbb9c84fb91eab2872250"} Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 21:31:49.471543 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab9b83135d63e0ba8cbba6fc194fc26d4555d078ac1dbb9c84fb91eab2872250" Feb 16 21:31:49 crc kubenswrapper[4805]: I0216 
21:31:49.471295 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2njwx" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.034842 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g"] Feb 16 21:31:57 crc kubenswrapper[4805]: E0216 21:31:57.036109 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd02c782-cac6-4fd1-af90-154f57b85754" containerName="extract-utilities" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.036127 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd02c782-cac6-4fd1-af90-154f57b85754" containerName="extract-utilities" Feb 16 21:31:57 crc kubenswrapper[4805]: E0216 21:31:57.036145 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd02c782-cac6-4fd1-af90-154f57b85754" containerName="registry-server" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.036154 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd02c782-cac6-4fd1-af90-154f57b85754" containerName="registry-server" Feb 16 21:31:57 crc kubenswrapper[4805]: E0216 21:31:57.036174 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd02c782-cac6-4fd1-af90-154f57b85754" containerName="extract-content" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.036183 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd02c782-cac6-4fd1-af90-154f57b85754" containerName="extract-content" Feb 16 21:31:57 crc kubenswrapper[4805]: E0216 21:31:57.036225 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7abc29d-8762-4f66-9b74-5bae943250ee" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.036235 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7abc29d-8762-4f66-9b74-5bae943250ee" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.036525 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7abc29d-8762-4f66-9b74-5bae943250ee" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.036556 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd02c782-cac6-4fd1-af90-154f57b85754" containerName="registry-server" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.037637 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.040540 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.041070 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.041112 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.041145 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46tr9" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.047421 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g"] Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.178661 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g\" (UID: 
\"9a751413-e386-4261-bcb7-830a111a4399\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.179154 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8jpm\" (UniqueName: \"kubernetes.io/projected/9a751413-e386-4261-bcb7-830a111a4399-kube-api-access-c8jpm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g\" (UID: \"9a751413-e386-4261-bcb7-830a111a4399\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.179193 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g\" (UID: \"9a751413-e386-4261-bcb7-830a111a4399\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.281255 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g\" (UID: \"9a751413-e386-4261-bcb7-830a111a4399\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.281439 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8jpm\" (UniqueName: \"kubernetes.io/projected/9a751413-e386-4261-bcb7-830a111a4399-kube-api-access-c8jpm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g\" (UID: \"9a751413-e386-4261-bcb7-830a111a4399\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:31:57 crc 
kubenswrapper[4805]: I0216 21:31:57.281467 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g\" (UID: \"9a751413-e386-4261-bcb7-830a111a4399\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.291496 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g\" (UID: \"9a751413-e386-4261-bcb7-830a111a4399\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.291610 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g\" (UID: \"9a751413-e386-4261-bcb7-830a111a4399\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.300464 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8jpm\" (UniqueName: \"kubernetes.io/projected/9a751413-e386-4261-bcb7-830a111a4399-kube-api-access-c8jpm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g\" (UID: \"9a751413-e386-4261-bcb7-830a111a4399\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.380530 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:31:57 crc kubenswrapper[4805]: I0216 21:31:57.979417 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g"] Feb 16 21:31:58 crc kubenswrapper[4805]: I0216 21:31:58.559522 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" event={"ID":"9a751413-e386-4261-bcb7-830a111a4399","Type":"ContainerStarted","Data":"ca3856806343cd8fc232dce2be157f1d753b1d74068e25588b27da5cf87a788b"} Feb 16 21:31:59 crc kubenswrapper[4805]: I0216 21:31:59.570350 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" event={"ID":"9a751413-e386-4261-bcb7-830a111a4399","Type":"ContainerStarted","Data":"5f17ac527471914f22bade86c1dd14c3c8053294743b33ebced76a265ab0c6ff"} Feb 16 21:31:59 crc kubenswrapper[4805]: I0216 21:31:59.603999 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" podStartSLOduration=2.191839153 podStartE2EDuration="2.603982484s" podCreationTimestamp="2026-02-16 21:31:57 +0000 UTC" firstStartedPulling="2026-02-16 21:31:57.970852919 +0000 UTC m=+2135.789536214" lastFinishedPulling="2026-02-16 21:31:58.38299625 +0000 UTC m=+2136.201679545" observedRunningTime="2026-02-16 21:31:59.583778197 +0000 UTC m=+2137.402461522" watchObservedRunningTime="2026-02-16 21:31:59.603982484 +0000 UTC m=+2137.422665779" Feb 16 21:32:00 crc kubenswrapper[4805]: E0216 21:32:00.600017 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:32:01 crc kubenswrapper[4805]: E0216 21:32:01.602953 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:32:08 crc kubenswrapper[4805]: I0216 21:32:08.099652 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:32:08 crc kubenswrapper[4805]: I0216 21:32:08.100134 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:32:15 crc kubenswrapper[4805]: E0216 21:32:15.602193 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:32:16 crc kubenswrapper[4805]: E0216 21:32:16.599162 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 
16 21:32:26 crc kubenswrapper[4805]: E0216 21:32:26.600411 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:32:31 crc kubenswrapper[4805]: E0216 21:32:31.600682 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:32:38 crc kubenswrapper[4805]: I0216 21:32:38.099683 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:32:38 crc kubenswrapper[4805]: I0216 21:32:38.100235 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:32:38 crc kubenswrapper[4805]: I0216 21:32:38.100282 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:32:38 crc kubenswrapper[4805]: I0216 21:32:38.101159 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:32:38 crc kubenswrapper[4805]: I0216 21:32:38.101203 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" gracePeriod=600 Feb 16 21:32:38 crc kubenswrapper[4805]: E0216 21:32:38.225764 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:32:38 crc kubenswrapper[4805]: E0216 21:32:38.599681 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:32:39 crc kubenswrapper[4805]: I0216 21:32:39.006169 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" exitCode=0 Feb 16 21:32:39 crc kubenswrapper[4805]: I0216 21:32:39.006239 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac"} Feb 16 21:32:39 crc kubenswrapper[4805]: I0216 21:32:39.006578 4805 scope.go:117] "RemoveContainer" containerID="490a8d059e400260c4694f6edba1a81d38fd229fa2bd2d72a515734efcb029e1" Feb 16 21:32:39 crc kubenswrapper[4805]: I0216 21:32:39.007554 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:32:39 crc kubenswrapper[4805]: E0216 21:32:39.008138 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:32:42 crc kubenswrapper[4805]: E0216 21:32:42.601105 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.078924 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9gjgr"] Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.081623 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.105364 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9gjgr"] Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.167915 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-catalog-content\") pod \"certified-operators-9gjgr\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.168234 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-utilities\") pod \"certified-operators-9gjgr\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.168315 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkc95\" (UniqueName: \"kubernetes.io/projected/e098c10f-48ae-4976-afcd-bafd32bdda23-kube-api-access-dkc95\") pod \"certified-operators-9gjgr\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.270121 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-catalog-content\") pod \"certified-operators-9gjgr\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.270185 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-utilities\") pod \"certified-operators-9gjgr\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.270280 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkc95\" (UniqueName: \"kubernetes.io/projected/e098c10f-48ae-4976-afcd-bafd32bdda23-kube-api-access-dkc95\") pod \"certified-operators-9gjgr\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.270862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-catalog-content\") pod \"certified-operators-9gjgr\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.270923 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-utilities\") pod \"certified-operators-9gjgr\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.293544 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkc95\" (UniqueName: \"kubernetes.io/projected/e098c10f-48ae-4976-afcd-bafd32bdda23-kube-api-access-dkc95\") pod \"certified-operators-9gjgr\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.417991 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:43 crc kubenswrapper[4805]: I0216 21:32:43.941200 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9gjgr"] Feb 16 21:32:44 crc kubenswrapper[4805]: I0216 21:32:44.066250 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gjgr" event={"ID":"e098c10f-48ae-4976-afcd-bafd32bdda23","Type":"ContainerStarted","Data":"8f1a80c994c412ef1c8092911e893ca5131765e5fde1900f837d4e7665240de9"} Feb 16 21:32:45 crc kubenswrapper[4805]: I0216 21:32:45.078016 4805 generic.go:334] "Generic (PLEG): container finished" podID="e098c10f-48ae-4976-afcd-bafd32bdda23" containerID="9e8d36aaf258e0a450da5738b1e1ee574ac2021bb0880a4cbde22b08178b14a8" exitCode=0 Feb 16 21:32:45 crc kubenswrapper[4805]: I0216 21:32:45.078370 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gjgr" event={"ID":"e098c10f-48ae-4976-afcd-bafd32bdda23","Type":"ContainerDied","Data":"9e8d36aaf258e0a450da5738b1e1ee574ac2021bb0880a4cbde22b08178b14a8"} Feb 16 21:32:47 crc kubenswrapper[4805]: I0216 21:32:47.108862 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gjgr" event={"ID":"e098c10f-48ae-4976-afcd-bafd32bdda23","Type":"ContainerStarted","Data":"ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559"} Feb 16 21:32:48 crc kubenswrapper[4805]: I0216 21:32:48.120676 4805 generic.go:334] "Generic (PLEG): container finished" podID="e098c10f-48ae-4976-afcd-bafd32bdda23" containerID="ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559" exitCode=0 Feb 16 21:32:48 crc kubenswrapper[4805]: I0216 21:32:48.120780 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gjgr" 
event={"ID":"e098c10f-48ae-4976-afcd-bafd32bdda23","Type":"ContainerDied","Data":"ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559"} Feb 16 21:32:49 crc kubenswrapper[4805]: I0216 21:32:49.148694 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gjgr" event={"ID":"e098c10f-48ae-4976-afcd-bafd32bdda23","Type":"ContainerStarted","Data":"5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c"} Feb 16 21:32:49 crc kubenswrapper[4805]: I0216 21:32:49.186311 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9gjgr" podStartSLOduration=2.757096813 podStartE2EDuration="6.186291021s" podCreationTimestamp="2026-02-16 21:32:43 +0000 UTC" firstStartedPulling="2026-02-16 21:32:45.081739961 +0000 UTC m=+2182.900423256" lastFinishedPulling="2026-02-16 21:32:48.510934169 +0000 UTC m=+2186.329617464" observedRunningTime="2026-02-16 21:32:49.169072995 +0000 UTC m=+2186.987756290" watchObservedRunningTime="2026-02-16 21:32:49.186291021 +0000 UTC m=+2187.004974316" Feb 16 21:32:51 crc kubenswrapper[4805]: I0216 21:32:51.598675 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:32:51 crc kubenswrapper[4805]: E0216 21:32:51.600307 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:32:52 crc kubenswrapper[4805]: E0216 21:32:52.600173 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:32:53 crc kubenswrapper[4805]: I0216 21:32:53.419171 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:53 crc kubenswrapper[4805]: I0216 21:32:53.419233 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:53 crc kubenswrapper[4805]: I0216 21:32:53.499129 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:54 crc kubenswrapper[4805]: I0216 21:32:54.306885 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:54 crc kubenswrapper[4805]: I0216 21:32:54.382766 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9gjgr"] Feb 16 21:32:56 crc kubenswrapper[4805]: I0216 21:32:56.231059 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9gjgr" podUID="e098c10f-48ae-4976-afcd-bafd32bdda23" containerName="registry-server" containerID="cri-o://5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c" gracePeriod=2 Feb 16 21:32:56 crc kubenswrapper[4805]: E0216 21:32:56.600049 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:32:56 crc kubenswrapper[4805]: I0216 21:32:56.823413 4805 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:56 crc kubenswrapper[4805]: I0216 21:32:56.931783 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-catalog-content\") pod \"e098c10f-48ae-4976-afcd-bafd32bdda23\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " Feb 16 21:32:56 crc kubenswrapper[4805]: I0216 21:32:56.932353 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-utilities\") pod \"e098c10f-48ae-4976-afcd-bafd32bdda23\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " Feb 16 21:32:56 crc kubenswrapper[4805]: I0216 21:32:56.932507 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkc95\" (UniqueName: \"kubernetes.io/projected/e098c10f-48ae-4976-afcd-bafd32bdda23-kube-api-access-dkc95\") pod \"e098c10f-48ae-4976-afcd-bafd32bdda23\" (UID: \"e098c10f-48ae-4976-afcd-bafd32bdda23\") " Feb 16 21:32:56 crc kubenswrapper[4805]: I0216 21:32:56.933336 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-utilities" (OuterVolumeSpecName: "utilities") pod "e098c10f-48ae-4976-afcd-bafd32bdda23" (UID: "e098c10f-48ae-4976-afcd-bafd32bdda23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:32:56 crc kubenswrapper[4805]: I0216 21:32:56.943223 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e098c10f-48ae-4976-afcd-bafd32bdda23-kube-api-access-dkc95" (OuterVolumeSpecName: "kube-api-access-dkc95") pod "e098c10f-48ae-4976-afcd-bafd32bdda23" (UID: "e098c10f-48ae-4976-afcd-bafd32bdda23"). InnerVolumeSpecName "kube-api-access-dkc95". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:32:56 crc kubenswrapper[4805]: I0216 21:32:56.979652 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e098c10f-48ae-4976-afcd-bafd32bdda23" (UID: "e098c10f-48ae-4976-afcd-bafd32bdda23"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.035941 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.036058 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkc95\" (UniqueName: \"kubernetes.io/projected/e098c10f-48ae-4976-afcd-bafd32bdda23-kube-api-access-dkc95\") on node \"crc\" DevicePath \"\"" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.036073 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e098c10f-48ae-4976-afcd-bafd32bdda23-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.256684 4805 generic.go:334] "Generic (PLEG): container finished" podID="e098c10f-48ae-4976-afcd-bafd32bdda23" containerID="5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c" exitCode=0 Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.256748 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gjgr" event={"ID":"e098c10f-48ae-4976-afcd-bafd32bdda23","Type":"ContainerDied","Data":"5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c"} Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.256849 4805 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9gjgr" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.256873 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gjgr" event={"ID":"e098c10f-48ae-4976-afcd-bafd32bdda23","Type":"ContainerDied","Data":"8f1a80c994c412ef1c8092911e893ca5131765e5fde1900f837d4e7665240de9"} Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.256908 4805 scope.go:117] "RemoveContainer" containerID="5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.316142 4805 scope.go:117] "RemoveContainer" containerID="ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.319717 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9gjgr"] Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.329205 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9gjgr"] Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.362033 4805 scope.go:117] "RemoveContainer" containerID="9e8d36aaf258e0a450da5738b1e1ee574ac2021bb0880a4cbde22b08178b14a8" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.417311 4805 scope.go:117] "RemoveContainer" containerID="5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c" Feb 16 21:32:57 crc kubenswrapper[4805]: E0216 21:32:57.418050 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c\": container with ID starting with 5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c not found: ID does not exist" containerID="5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.418112 
4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c"} err="failed to get container status \"5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c\": rpc error: code = NotFound desc = could not find container \"5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c\": container with ID starting with 5a9336e2046b2a35dbaaa28a228408b05c918585540bdbb2ce94212e3f01f27c not found: ID does not exist" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.418144 4805 scope.go:117] "RemoveContainer" containerID="ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559" Feb 16 21:32:57 crc kubenswrapper[4805]: E0216 21:32:57.418569 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559\": container with ID starting with ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559 not found: ID does not exist" containerID="ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.418631 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559"} err="failed to get container status \"ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559\": rpc error: code = NotFound desc = could not find container \"ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559\": container with ID starting with ac0b1140cf7d3be285d64b2a9237ae491238eed36cc7ac65d47755afbbc6e559 not found: ID does not exist" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.418662 4805 scope.go:117] "RemoveContainer" containerID="9e8d36aaf258e0a450da5738b1e1ee574ac2021bb0880a4cbde22b08178b14a8" Feb 16 21:32:57 crc kubenswrapper[4805]: E0216 
21:32:57.419071 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e8d36aaf258e0a450da5738b1e1ee574ac2021bb0880a4cbde22b08178b14a8\": container with ID starting with 9e8d36aaf258e0a450da5738b1e1ee574ac2021bb0880a4cbde22b08178b14a8 not found: ID does not exist" containerID="9e8d36aaf258e0a450da5738b1e1ee574ac2021bb0880a4cbde22b08178b14a8" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.419116 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e8d36aaf258e0a450da5738b1e1ee574ac2021bb0880a4cbde22b08178b14a8"} err="failed to get container status \"9e8d36aaf258e0a450da5738b1e1ee574ac2021bb0880a4cbde22b08178b14a8\": rpc error: code = NotFound desc = could not find container \"9e8d36aaf258e0a450da5738b1e1ee574ac2021bb0880a4cbde22b08178b14a8\": container with ID starting with 9e8d36aaf258e0a450da5738b1e1ee574ac2021bb0880a4cbde22b08178b14a8 not found: ID does not exist" Feb 16 21:32:57 crc kubenswrapper[4805]: I0216 21:32:57.619629 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e098c10f-48ae-4976-afcd-bafd32bdda23" path="/var/lib/kubelet/pods/e098c10f-48ae-4976-afcd-bafd32bdda23/volumes" Feb 16 21:33:03 crc kubenswrapper[4805]: I0216 21:33:03.607505 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:33:03 crc kubenswrapper[4805]: E0216 21:33:03.608524 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:33:04 crc kubenswrapper[4805]: E0216 21:33:04.600207 
4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.125890 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bqh25"] Feb 16 21:33:09 crc kubenswrapper[4805]: E0216 21:33:09.127008 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e098c10f-48ae-4976-afcd-bafd32bdda23" containerName="extract-utilities" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.127024 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e098c10f-48ae-4976-afcd-bafd32bdda23" containerName="extract-utilities" Feb 16 21:33:09 crc kubenswrapper[4805]: E0216 21:33:09.127055 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e098c10f-48ae-4976-afcd-bafd32bdda23" containerName="registry-server" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.127061 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e098c10f-48ae-4976-afcd-bafd32bdda23" containerName="registry-server" Feb 16 21:33:09 crc kubenswrapper[4805]: E0216 21:33:09.127097 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e098c10f-48ae-4976-afcd-bafd32bdda23" containerName="extract-content" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.127103 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e098c10f-48ae-4976-afcd-bafd32bdda23" containerName="extract-content" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.127319 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e098c10f-48ae-4976-afcd-bafd32bdda23" containerName="registry-server" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.129351 4805 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.165424 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bqh25"] Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.273516 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-catalog-content\") pod \"redhat-marketplace-bqh25\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.273894 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rkdv\" (UniqueName: \"kubernetes.io/projected/340cce1d-2140-400e-aa99-43bdf890f383-kube-api-access-4rkdv\") pod \"redhat-marketplace-bqh25\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.275034 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-utilities\") pod \"redhat-marketplace-bqh25\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.377634 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-utilities\") pod \"redhat-marketplace-bqh25\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.377749 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-catalog-content\") pod \"redhat-marketplace-bqh25\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.377915 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rkdv\" (UniqueName: \"kubernetes.io/projected/340cce1d-2140-400e-aa99-43bdf890f383-kube-api-access-4rkdv\") pod \"redhat-marketplace-bqh25\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.378151 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-utilities\") pod \"redhat-marketplace-bqh25\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.378600 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-catalog-content\") pod \"redhat-marketplace-bqh25\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.398142 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rkdv\" (UniqueName: \"kubernetes.io/projected/340cce1d-2140-400e-aa99-43bdf890f383-kube-api-access-4rkdv\") pod \"redhat-marketplace-bqh25\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.510979 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:09 crc kubenswrapper[4805]: E0216 21:33:09.606559 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:33:09 crc kubenswrapper[4805]: I0216 21:33:09.975863 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bqh25"] Feb 16 21:33:10 crc kubenswrapper[4805]: I0216 21:33:10.407178 4805 generic.go:334] "Generic (PLEG): container finished" podID="340cce1d-2140-400e-aa99-43bdf890f383" containerID="37e4a9d8625024d7983647be5c5fc72cda8b978486f35497d92e6ca4c331c51b" exitCode=0 Feb 16 21:33:10 crc kubenswrapper[4805]: I0216 21:33:10.407240 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bqh25" event={"ID":"340cce1d-2140-400e-aa99-43bdf890f383","Type":"ContainerDied","Data":"37e4a9d8625024d7983647be5c5fc72cda8b978486f35497d92e6ca4c331c51b"} Feb 16 21:33:10 crc kubenswrapper[4805]: I0216 21:33:10.407291 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bqh25" event={"ID":"340cce1d-2140-400e-aa99-43bdf890f383","Type":"ContainerStarted","Data":"8e8af1620edde3683abd2f9561c8bf7c675a3b03917cf5d87d4c196015393019"} Feb 16 21:33:11 crc kubenswrapper[4805]: I0216 21:33:11.419530 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bqh25" event={"ID":"340cce1d-2140-400e-aa99-43bdf890f383","Type":"ContainerStarted","Data":"bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af"} Feb 16 21:33:12 crc kubenswrapper[4805]: I0216 21:33:12.446564 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="340cce1d-2140-400e-aa99-43bdf890f383" containerID="bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af" exitCode=0 Feb 16 21:33:12 crc kubenswrapper[4805]: I0216 21:33:12.446689 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bqh25" event={"ID":"340cce1d-2140-400e-aa99-43bdf890f383","Type":"ContainerDied","Data":"bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af"} Feb 16 21:33:13 crc kubenswrapper[4805]: I0216 21:33:13.459625 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bqh25" event={"ID":"340cce1d-2140-400e-aa99-43bdf890f383","Type":"ContainerStarted","Data":"646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba"} Feb 16 21:33:14 crc kubenswrapper[4805]: I0216 21:33:14.598031 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:33:14 crc kubenswrapper[4805]: E0216 21:33:14.598327 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:33:17 crc kubenswrapper[4805]: E0216 21:33:17.601660 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:33:19 crc kubenswrapper[4805]: I0216 21:33:19.511705 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:19 crc kubenswrapper[4805]: I0216 21:33:19.512101 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:19 crc kubenswrapper[4805]: I0216 21:33:19.576445 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:19 crc kubenswrapper[4805]: I0216 21:33:19.612427 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bqh25" podStartSLOduration=7.881572138 podStartE2EDuration="10.612408593s" podCreationTimestamp="2026-02-16 21:33:09 +0000 UTC" firstStartedPulling="2026-02-16 21:33:10.408904077 +0000 UTC m=+2208.227587372" lastFinishedPulling="2026-02-16 21:33:13.139740532 +0000 UTC m=+2210.958423827" observedRunningTime="2026-02-16 21:33:13.483224274 +0000 UTC m=+2211.301907569" watchObservedRunningTime="2026-02-16 21:33:19.612408593 +0000 UTC m=+2217.431091888" Feb 16 21:33:19 crc kubenswrapper[4805]: I0216 21:33:19.651763 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:19 crc kubenswrapper[4805]: I0216 21:33:19.830518 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bqh25"] Feb 16 21:33:21 crc kubenswrapper[4805]: I0216 21:33:21.558825 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bqh25" podUID="340cce1d-2140-400e-aa99-43bdf890f383" containerName="registry-server" containerID="cri-o://646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba" gracePeriod=2 Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.122062 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.212116 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-utilities\") pod \"340cce1d-2140-400e-aa99-43bdf890f383\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.212769 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rkdv\" (UniqueName: \"kubernetes.io/projected/340cce1d-2140-400e-aa99-43bdf890f383-kube-api-access-4rkdv\") pod \"340cce1d-2140-400e-aa99-43bdf890f383\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.212823 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-utilities" (OuterVolumeSpecName: "utilities") pod "340cce1d-2140-400e-aa99-43bdf890f383" (UID: "340cce1d-2140-400e-aa99-43bdf890f383"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.212914 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-catalog-content\") pod \"340cce1d-2140-400e-aa99-43bdf890f383\" (UID: \"340cce1d-2140-400e-aa99-43bdf890f383\") " Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.213633 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.224039 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/340cce1d-2140-400e-aa99-43bdf890f383-kube-api-access-4rkdv" (OuterVolumeSpecName: "kube-api-access-4rkdv") pod "340cce1d-2140-400e-aa99-43bdf890f383" (UID: "340cce1d-2140-400e-aa99-43bdf890f383"). InnerVolumeSpecName "kube-api-access-4rkdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.254025 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "340cce1d-2140-400e-aa99-43bdf890f383" (UID: "340cce1d-2140-400e-aa99-43bdf890f383"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.316281 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rkdv\" (UniqueName: \"kubernetes.io/projected/340cce1d-2140-400e-aa99-43bdf890f383-kube-api-access-4rkdv\") on node \"crc\" DevicePath \"\"" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.316314 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340cce1d-2140-400e-aa99-43bdf890f383-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.572538 4805 generic.go:334] "Generic (PLEG): container finished" podID="340cce1d-2140-400e-aa99-43bdf890f383" containerID="646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba" exitCode=0 Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.572626 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bqh25" event={"ID":"340cce1d-2140-400e-aa99-43bdf890f383","Type":"ContainerDied","Data":"646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba"} Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.572678 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bqh25" event={"ID":"340cce1d-2140-400e-aa99-43bdf890f383","Type":"ContainerDied","Data":"8e8af1620edde3683abd2f9561c8bf7c675a3b03917cf5d87d4c196015393019"} Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.572739 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bqh25" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.572748 4805 scope.go:117] "RemoveContainer" containerID="646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.597616 4805 scope.go:117] "RemoveContainer" containerID="bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af" Feb 16 21:33:22 crc kubenswrapper[4805]: E0216 21:33:22.601345 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.613117 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bqh25"] Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.623398 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bqh25"] Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.633266 4805 scope.go:117] "RemoveContainer" containerID="37e4a9d8625024d7983647be5c5fc72cda8b978486f35497d92e6ca4c331c51b" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.724571 4805 scope.go:117] "RemoveContainer" containerID="646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba" Feb 16 21:33:22 crc kubenswrapper[4805]: E0216 21:33:22.725003 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba\": container with ID starting with 646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba not found: ID does not exist" containerID="646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba" 
Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.725035 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba"} err="failed to get container status \"646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba\": rpc error: code = NotFound desc = could not find container \"646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba\": container with ID starting with 646f9e5d227fde53193c4b78fb5ddd055d084fdd87f74e721d28aeb605cfd0ba not found: ID does not exist" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.725056 4805 scope.go:117] "RemoveContainer" containerID="bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af" Feb 16 21:33:22 crc kubenswrapper[4805]: E0216 21:33:22.725545 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af\": container with ID starting with bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af not found: ID does not exist" containerID="bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.725569 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af"} err="failed to get container status \"bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af\": rpc error: code = NotFound desc = could not find container \"bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af\": container with ID starting with bca64300bf122283e81cb1757c327609d572171c3f81e31181d4bd1319fd06af not found: ID does not exist" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.725583 4805 scope.go:117] "RemoveContainer" 
containerID="37e4a9d8625024d7983647be5c5fc72cda8b978486f35497d92e6ca4c331c51b" Feb 16 21:33:22 crc kubenswrapper[4805]: E0216 21:33:22.726079 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37e4a9d8625024d7983647be5c5fc72cda8b978486f35497d92e6ca4c331c51b\": container with ID starting with 37e4a9d8625024d7983647be5c5fc72cda8b978486f35497d92e6ca4c331c51b not found: ID does not exist" containerID="37e4a9d8625024d7983647be5c5fc72cda8b978486f35497d92e6ca4c331c51b" Feb 16 21:33:22 crc kubenswrapper[4805]: I0216 21:33:22.726117 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e4a9d8625024d7983647be5c5fc72cda8b978486f35497d92e6ca4c331c51b"} err="failed to get container status \"37e4a9d8625024d7983647be5c5fc72cda8b978486f35497d92e6ca4c331c51b\": rpc error: code = NotFound desc = could not find container \"37e4a9d8625024d7983647be5c5fc72cda8b978486f35497d92e6ca4c331c51b\": container with ID starting with 37e4a9d8625024d7983647be5c5fc72cda8b978486f35497d92e6ca4c331c51b not found: ID does not exist" Feb 16 21:33:23 crc kubenswrapper[4805]: I0216 21:33:23.620423 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="340cce1d-2140-400e-aa99-43bdf890f383" path="/var/lib/kubelet/pods/340cce1d-2140-400e-aa99-43bdf890f383/volumes" Feb 16 21:33:25 crc kubenswrapper[4805]: I0216 21:33:25.601856 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:33:25 crc kubenswrapper[4805]: E0216 21:33:25.602939 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:33:30 crc kubenswrapper[4805]: E0216 21:33:30.601295 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:33:35 crc kubenswrapper[4805]: E0216 21:33:35.601545 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:33:38 crc kubenswrapper[4805]: I0216 21:33:38.598332 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:33:38 crc kubenswrapper[4805]: E0216 21:33:38.599770 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:33:43 crc kubenswrapper[4805]: E0216 21:33:43.615252 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" 
Feb 16 21:33:50 crc kubenswrapper[4805]: E0216 21:33:50.600583 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:33:51 crc kubenswrapper[4805]: I0216 21:33:51.598741 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:33:51 crc kubenswrapper[4805]: E0216 21:33:51.599632 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:33:58 crc kubenswrapper[4805]: E0216 21:33:58.601398 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:34:01 crc kubenswrapper[4805]: E0216 21:34:01.608027 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:34:06 crc kubenswrapper[4805]: I0216 21:34:06.598114 4805 scope.go:117] "RemoveContainer" 
containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:34:06 crc kubenswrapper[4805]: E0216 21:34:06.599643 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:34:10 crc kubenswrapper[4805]: E0216 21:34:10.606390 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:34:13 crc kubenswrapper[4805]: E0216 21:34:13.609151 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:34:18 crc kubenswrapper[4805]: I0216 21:34:18.599195 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:34:18 crc kubenswrapper[4805]: E0216 21:34:18.599988 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:34:25 crc kubenswrapper[4805]: E0216 21:34:25.602930 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:34:25 crc kubenswrapper[4805]: E0216 21:34:25.602941 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:34:33 crc kubenswrapper[4805]: I0216 21:34:33.609758 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:34:33 crc kubenswrapper[4805]: E0216 21:34:33.610601 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:34:38 crc kubenswrapper[4805]: E0216 21:34:38.601788 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 
21:34:38 crc kubenswrapper[4805]: E0216 21:34:38.601793 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:34:46 crc kubenswrapper[4805]: I0216 21:34:46.598631 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:34:46 crc kubenswrapper[4805]: E0216 21:34:46.599839 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.144599 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lggmf"] Feb 16 21:34:48 crc kubenswrapper[4805]: E0216 21:34:48.145564 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340cce1d-2140-400e-aa99-43bdf890f383" containerName="registry-server" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.145581 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="340cce1d-2140-400e-aa99-43bdf890f383" containerName="registry-server" Feb 16 21:34:48 crc kubenswrapper[4805]: E0216 21:34:48.145615 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340cce1d-2140-400e-aa99-43bdf890f383" containerName="extract-utilities" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.145624 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="340cce1d-2140-400e-aa99-43bdf890f383" containerName="extract-utilities" Feb 16 21:34:48 crc kubenswrapper[4805]: E0216 21:34:48.145670 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340cce1d-2140-400e-aa99-43bdf890f383" containerName="extract-content" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.145679 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="340cce1d-2140-400e-aa99-43bdf890f383" containerName="extract-content" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.146000 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="340cce1d-2140-400e-aa99-43bdf890f383" containerName="registry-server" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.165004 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.173864 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lggmf"] Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.292226 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-utilities\") pod \"community-operators-lggmf\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.292625 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qhlr\" (UniqueName: \"kubernetes.io/projected/ba1e7203-5404-4a0a-97f4-13742b74c14d-kube-api-access-2qhlr\") pod \"community-operators-lggmf\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.292801 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-catalog-content\") pod \"community-operators-lggmf\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.394675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qhlr\" (UniqueName: \"kubernetes.io/projected/ba1e7203-5404-4a0a-97f4-13742b74c14d-kube-api-access-2qhlr\") pod \"community-operators-lggmf\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.394873 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-catalog-content\") pod \"community-operators-lggmf\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.394997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-utilities\") pod \"community-operators-lggmf\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.395421 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-catalog-content\") pod \"community-operators-lggmf\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.395505 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-utilities\") pod \"community-operators-lggmf\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.421253 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qhlr\" (UniqueName: \"kubernetes.io/projected/ba1e7203-5404-4a0a-97f4-13742b74c14d-kube-api-access-2qhlr\") pod \"community-operators-lggmf\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:48 crc kubenswrapper[4805]: I0216 21:34:48.511752 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:49 crc kubenswrapper[4805]: I0216 21:34:49.031593 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lggmf"] Feb 16 21:34:49 crc kubenswrapper[4805]: I0216 21:34:49.618668 4805 generic.go:334] "Generic (PLEG): container finished" podID="ba1e7203-5404-4a0a-97f4-13742b74c14d" containerID="89dd80fbdf7725e369fee558d59f0032ea5f08b3488c5003a0b5660fb66e4152" exitCode=0 Feb 16 21:34:49 crc kubenswrapper[4805]: I0216 21:34:49.618759 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lggmf" event={"ID":"ba1e7203-5404-4a0a-97f4-13742b74c14d","Type":"ContainerDied","Data":"89dd80fbdf7725e369fee558d59f0032ea5f08b3488c5003a0b5660fb66e4152"} Feb 16 21:34:49 crc kubenswrapper[4805]: I0216 21:34:49.619038 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lggmf" event={"ID":"ba1e7203-5404-4a0a-97f4-13742b74c14d","Type":"ContainerStarted","Data":"f4c1cc8a6b696cf71d56512ec206070d758525d9b6e1b7590f0da65d5855b4c3"} Feb 16 21:34:50 crc kubenswrapper[4805]: E0216 
21:34:50.599241 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:34:51 crc kubenswrapper[4805]: I0216 21:34:51.643379 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lggmf" event={"ID":"ba1e7203-5404-4a0a-97f4-13742b74c14d","Type":"ContainerStarted","Data":"89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f"} Feb 16 21:34:52 crc kubenswrapper[4805]: E0216 21:34:52.599579 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:34:52 crc kubenswrapper[4805]: I0216 21:34:52.657054 4805 generic.go:334] "Generic (PLEG): container finished" podID="ba1e7203-5404-4a0a-97f4-13742b74c14d" containerID="89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f" exitCode=0 Feb 16 21:34:52 crc kubenswrapper[4805]: I0216 21:34:52.657113 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lggmf" event={"ID":"ba1e7203-5404-4a0a-97f4-13742b74c14d","Type":"ContainerDied","Data":"89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f"} Feb 16 21:34:53 crc kubenswrapper[4805]: I0216 21:34:53.672208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lggmf" event={"ID":"ba1e7203-5404-4a0a-97f4-13742b74c14d","Type":"ContainerStarted","Data":"a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661"} Feb 16 21:34:53 
crc kubenswrapper[4805]: I0216 21:34:53.700308 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lggmf" podStartSLOduration=2.204786403 podStartE2EDuration="5.700286181s" podCreationTimestamp="2026-02-16 21:34:48 +0000 UTC" firstStartedPulling="2026-02-16 21:34:49.62074789 +0000 UTC m=+2307.439431195" lastFinishedPulling="2026-02-16 21:34:53.116247678 +0000 UTC m=+2310.934930973" observedRunningTime="2026-02-16 21:34:53.698227006 +0000 UTC m=+2311.516910381" watchObservedRunningTime="2026-02-16 21:34:53.700286181 +0000 UTC m=+2311.518969476" Feb 16 21:34:57 crc kubenswrapper[4805]: I0216 21:34:57.598085 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:34:57 crc kubenswrapper[4805]: E0216 21:34:57.599041 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:34:58 crc kubenswrapper[4805]: I0216 21:34:58.511907 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:58 crc kubenswrapper[4805]: I0216 21:34:58.512354 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:58 crc kubenswrapper[4805]: I0216 21:34:58.592777 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:58 crc kubenswrapper[4805]: I0216 21:34:58.803129 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:34:58 crc kubenswrapper[4805]: I0216 21:34:58.870560 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lggmf"] Feb 16 21:35:00 crc kubenswrapper[4805]: I0216 21:35:00.754579 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lggmf" podUID="ba1e7203-5404-4a0a-97f4-13742b74c14d" containerName="registry-server" containerID="cri-o://a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661" gracePeriod=2 Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.388283 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.536248 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-catalog-content\") pod \"ba1e7203-5404-4a0a-97f4-13742b74c14d\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.536487 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-utilities\") pod \"ba1e7203-5404-4a0a-97f4-13742b74c14d\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.536604 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qhlr\" (UniqueName: \"kubernetes.io/projected/ba1e7203-5404-4a0a-97f4-13742b74c14d-kube-api-access-2qhlr\") pod \"ba1e7203-5404-4a0a-97f4-13742b74c14d\" (UID: \"ba1e7203-5404-4a0a-97f4-13742b74c14d\") " Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.539566 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-utilities" (OuterVolumeSpecName: "utilities") pod "ba1e7203-5404-4a0a-97f4-13742b74c14d" (UID: "ba1e7203-5404-4a0a-97f4-13742b74c14d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.547923 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1e7203-5404-4a0a-97f4-13742b74c14d-kube-api-access-2qhlr" (OuterVolumeSpecName: "kube-api-access-2qhlr") pod "ba1e7203-5404-4a0a-97f4-13742b74c14d" (UID: "ba1e7203-5404-4a0a-97f4-13742b74c14d"). InnerVolumeSpecName "kube-api-access-2qhlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.640348 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.640393 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qhlr\" (UniqueName: \"kubernetes.io/projected/ba1e7203-5404-4a0a-97f4-13742b74c14d-kube-api-access-2qhlr\") on node \"crc\" DevicePath \"\"" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.641183 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba1e7203-5404-4a0a-97f4-13742b74c14d" (UID: "ba1e7203-5404-4a0a-97f4-13742b74c14d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.743287 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1e7203-5404-4a0a-97f4-13742b74c14d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.770791 4805 generic.go:334] "Generic (PLEG): container finished" podID="ba1e7203-5404-4a0a-97f4-13742b74c14d" containerID="a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661" exitCode=0 Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.770839 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lggmf" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.770855 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lggmf" event={"ID":"ba1e7203-5404-4a0a-97f4-13742b74c14d","Type":"ContainerDied","Data":"a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661"} Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.770894 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lggmf" event={"ID":"ba1e7203-5404-4a0a-97f4-13742b74c14d","Type":"ContainerDied","Data":"f4c1cc8a6b696cf71d56512ec206070d758525d9b6e1b7590f0da65d5855b4c3"} Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.770922 4805 scope.go:117] "RemoveContainer" containerID="a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.813451 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lggmf"] Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.822055 4805 scope.go:117] "RemoveContainer" containerID="89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f" Feb 16 21:35:01 crc kubenswrapper[4805]: 
I0216 21:35:01.827687 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lggmf"] Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.843975 4805 scope.go:117] "RemoveContainer" containerID="89dd80fbdf7725e369fee558d59f0032ea5f08b3488c5003a0b5660fb66e4152" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.917285 4805 scope.go:117] "RemoveContainer" containerID="a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661" Feb 16 21:35:01 crc kubenswrapper[4805]: E0216 21:35:01.917639 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661\": container with ID starting with a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661 not found: ID does not exist" containerID="a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.917674 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661"} err="failed to get container status \"a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661\": rpc error: code = NotFound desc = could not find container \"a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661\": container with ID starting with a8944de24a1d8923f7a7fac6cd3100bc078038b2d61cd2164601088b12f74661 not found: ID does not exist" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.917701 4805 scope.go:117] "RemoveContainer" containerID="89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f" Feb 16 21:35:01 crc kubenswrapper[4805]: E0216 21:35:01.917986 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f\": container 
with ID starting with 89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f not found: ID does not exist" containerID="89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.918010 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f"} err="failed to get container status \"89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f\": rpc error: code = NotFound desc = could not find container \"89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f\": container with ID starting with 89ef883319bd32bc1f2527d42653720b11968413cbafb4019d53c4a05ed7d84f not found: ID does not exist" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.918026 4805 scope.go:117] "RemoveContainer" containerID="89dd80fbdf7725e369fee558d59f0032ea5f08b3488c5003a0b5660fb66e4152" Feb 16 21:35:01 crc kubenswrapper[4805]: E0216 21:35:01.918283 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89dd80fbdf7725e369fee558d59f0032ea5f08b3488c5003a0b5660fb66e4152\": container with ID starting with 89dd80fbdf7725e369fee558d59f0032ea5f08b3488c5003a0b5660fb66e4152 not found: ID does not exist" containerID="89dd80fbdf7725e369fee558d59f0032ea5f08b3488c5003a0b5660fb66e4152" Feb 16 21:35:01 crc kubenswrapper[4805]: I0216 21:35:01.918312 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dd80fbdf7725e369fee558d59f0032ea5f08b3488c5003a0b5660fb66e4152"} err="failed to get container status \"89dd80fbdf7725e369fee558d59f0032ea5f08b3488c5003a0b5660fb66e4152\": rpc error: code = NotFound desc = could not find container \"89dd80fbdf7725e369fee558d59f0032ea5f08b3488c5003a0b5660fb66e4152\": container with ID starting with 89dd80fbdf7725e369fee558d59f0032ea5f08b3488c5003a0b5660fb66e4152 not 
found: ID does not exist" Feb 16 21:35:03 crc kubenswrapper[4805]: E0216 21:35:03.610050 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:35:03 crc kubenswrapper[4805]: E0216 21:35:03.610210 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:35:03 crc kubenswrapper[4805]: I0216 21:35:03.618831 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1e7203-5404-4a0a-97f4-13742b74c14d" path="/var/lib/kubelet/pods/ba1e7203-5404-4a0a-97f4-13742b74c14d/volumes" Feb 16 21:35:11 crc kubenswrapper[4805]: I0216 21:35:11.599696 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:35:11 crc kubenswrapper[4805]: E0216 21:35:11.600636 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:35:14 crc kubenswrapper[4805]: E0216 21:35:14.602528 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:35:17 crc kubenswrapper[4805]: E0216 21:35:17.600759 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:35:23 crc kubenswrapper[4805]: I0216 21:35:23.605321 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:35:23 crc kubenswrapper[4805]: E0216 21:35:23.606281 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:35:26 crc kubenswrapper[4805]: E0216 21:35:26.602335 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:35:29 crc kubenswrapper[4805]: E0216 21:35:29.602689 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:35:36 crc kubenswrapper[4805]: I0216 21:35:36.598229 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:35:36 crc kubenswrapper[4805]: E0216 21:35:36.599114 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:35:38 crc kubenswrapper[4805]: E0216 21:35:38.600004 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:35:44 crc kubenswrapper[4805]: E0216 21:35:44.600545 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:35:49 crc kubenswrapper[4805]: I0216 21:35:49.598080 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:35:49 crc kubenswrapper[4805]: E0216 21:35:49.599102 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:35:52 crc kubenswrapper[4805]: E0216 21:35:52.601607 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:35:56 crc kubenswrapper[4805]: E0216 21:35:56.600881 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:36:02 crc kubenswrapper[4805]: I0216 21:36:02.599167 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:36:02 crc kubenswrapper[4805]: E0216 21:36:02.600143 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:36:04 crc kubenswrapper[4805]: E0216 21:36:04.600622 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:36:07 crc kubenswrapper[4805]: E0216 21:36:07.600638 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:36:14 crc kubenswrapper[4805]: I0216 21:36:14.598950 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:36:14 crc kubenswrapper[4805]: E0216 21:36:14.600316 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:36:18 crc kubenswrapper[4805]: I0216 21:36:18.601242 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:36:18 crc kubenswrapper[4805]: E0216 21:36:18.696169 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:36:18 crc kubenswrapper[4805]: E0216 21:36:18.696230 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:36:18 crc kubenswrapper[4805]: E0216 21:36:18.696396 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89
q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:36:18 crc kubenswrapper[4805]: E0216 21:36:18.697579 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:36:20 crc kubenswrapper[4805]: E0216 21:36:20.601403 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:36:28 crc kubenswrapper[4805]: I0216 21:36:28.597973 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:36:28 crc kubenswrapper[4805]: E0216 21:36:28.598924 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:36:29 crc kubenswrapper[4805]: E0216 21:36:29.611293 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:36:32 crc kubenswrapper[4805]: E0216 21:36:32.600907 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:36:40 crc kubenswrapper[4805]: I0216 21:36:40.598947 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:36:40 crc kubenswrapper[4805]: E0216 21:36:40.600126 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:36:42 crc kubenswrapper[4805]: E0216 21:36:42.602153 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:36:45 crc kubenswrapper[4805]: E0216 21:36:45.730804 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:36:45 crc kubenswrapper[4805]: E0216 21:36:45.731138 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:36:45 crc kubenswrapper[4805]: E0216 21:36:45.731262 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca
-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 21:36:45 crc kubenswrapper[4805]: E0216 21:36:45.732462 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:36:54 crc kubenswrapper[4805]: I0216 21:36:54.599878 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:36:54 crc kubenswrapper[4805]: E0216 21:36:54.601100 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:36:56 crc kubenswrapper[4805]: E0216 21:36:56.600024 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:36:57 crc kubenswrapper[4805]: E0216 21:36:57.600294 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:37:05 crc kubenswrapper[4805]: I0216 21:37:05.599181 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:37:05 crc kubenswrapper[4805]: E0216 21:37:05.600475 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:37:08 crc kubenswrapper[4805]: E0216 21:37:08.601827 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:37:11 crc kubenswrapper[4805]: E0216 21:37:11.600536 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:37:19 crc kubenswrapper[4805]: I0216 21:37:19.598209 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:37:19 crc kubenswrapper[4805]: E0216 21:37:19.599591 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:37:20 crc kubenswrapper[4805]: E0216 21:37:20.600912 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:37:22 crc kubenswrapper[4805]: E0216 21:37:22.599592 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:37:32 crc kubenswrapper[4805]: I0216 21:37:32.598634 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:37:32 crc kubenswrapper[4805]: E0216 21:37:32.599909 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:37:34 crc kubenswrapper[4805]: E0216 21:37:34.603403 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:37:36 crc kubenswrapper[4805]: E0216 21:37:36.601323 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:37:44 crc kubenswrapper[4805]: I0216 21:37:44.598544 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:37:45 crc kubenswrapper[4805]: I0216 21:37:45.610898 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"e7aab641f1074349faea491ff7070bb250cc46b4fa5994780dd852f3f88eb092"} Feb 16 21:37:47 crc kubenswrapper[4805]: E0216 21:37:47.605784 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:37:51 crc kubenswrapper[4805]: E0216 21:37:51.601438 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:38:02 crc kubenswrapper[4805]: E0216 
21:38:02.600784 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:38:04 crc kubenswrapper[4805]: E0216 21:38:04.601976 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:38:14 crc kubenswrapper[4805]: E0216 21:38:14.599885 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:38:15 crc kubenswrapper[4805]: I0216 21:38:15.946916 4805 generic.go:334] "Generic (PLEG): container finished" podID="9a751413-e386-4261-bcb7-830a111a4399" containerID="5f17ac527471914f22bade86c1dd14c3c8053294743b33ebced76a265ab0c6ff" exitCode=2 Feb 16 21:38:15 crc kubenswrapper[4805]: I0216 21:38:15.946999 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" event={"ID":"9a751413-e386-4261-bcb7-830a111a4399","Type":"ContainerDied","Data":"5f17ac527471914f22bade86c1dd14c3c8053294743b33ebced76a265ab0c6ff"} Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.487855 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:38:17 crc kubenswrapper[4805]: E0216 21:38:17.599521 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.663095 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-inventory\") pod \"9a751413-e386-4261-bcb7-830a111a4399\" (UID: \"9a751413-e386-4261-bcb7-830a111a4399\") " Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.663207 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-ssh-key-openstack-edpm-ipam\") pod \"9a751413-e386-4261-bcb7-830a111a4399\" (UID: \"9a751413-e386-4261-bcb7-830a111a4399\") " Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.663278 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8jpm\" (UniqueName: \"kubernetes.io/projected/9a751413-e386-4261-bcb7-830a111a4399-kube-api-access-c8jpm\") pod \"9a751413-e386-4261-bcb7-830a111a4399\" (UID: \"9a751413-e386-4261-bcb7-830a111a4399\") " Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.675994 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a751413-e386-4261-bcb7-830a111a4399-kube-api-access-c8jpm" (OuterVolumeSpecName: "kube-api-access-c8jpm") pod "9a751413-e386-4261-bcb7-830a111a4399" (UID: "9a751413-e386-4261-bcb7-830a111a4399"). InnerVolumeSpecName "kube-api-access-c8jpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.698712 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a751413-e386-4261-bcb7-830a111a4399" (UID: "9a751413-e386-4261-bcb7-830a111a4399"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.699219 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-inventory" (OuterVolumeSpecName: "inventory") pod "9a751413-e386-4261-bcb7-830a111a4399" (UID: "9a751413-e386-4261-bcb7-830a111a4399"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.767150 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.767880 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a751413-e386-4261-bcb7-830a111a4399-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.768058 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8jpm\" (UniqueName: \"kubernetes.io/projected/9a751413-e386-4261-bcb7-830a111a4399-kube-api-access-c8jpm\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.969855 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" 
event={"ID":"9a751413-e386-4261-bcb7-830a111a4399","Type":"ContainerDied","Data":"ca3856806343cd8fc232dce2be157f1d753b1d74068e25588b27da5cf87a788b"} Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.970145 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca3856806343cd8fc232dce2be157f1d753b1d74068e25588b27da5cf87a788b" Feb 16 21:38:17 crc kubenswrapper[4805]: I0216 21:38:17.969910 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g" Feb 16 21:38:25 crc kubenswrapper[4805]: E0216 21:38:25.601102 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:38:30 crc kubenswrapper[4805]: E0216 21:38:30.600437 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.038379 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm"] Feb 16 21:38:35 crc kubenswrapper[4805]: E0216 21:38:35.039619 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1e7203-5404-4a0a-97f4-13742b74c14d" containerName="extract-utilities" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.039775 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1e7203-5404-4a0a-97f4-13742b74c14d" containerName="extract-utilities" Feb 16 21:38:35 crc 
kubenswrapper[4805]: E0216 21:38:35.039804 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1e7203-5404-4a0a-97f4-13742b74c14d" containerName="extract-content" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.039814 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1e7203-5404-4a0a-97f4-13742b74c14d" containerName="extract-content" Feb 16 21:38:35 crc kubenswrapper[4805]: E0216 21:38:35.039852 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1e7203-5404-4a0a-97f4-13742b74c14d" containerName="registry-server" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.039865 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1e7203-5404-4a0a-97f4-13742b74c14d" containerName="registry-server" Feb 16 21:38:35 crc kubenswrapper[4805]: E0216 21:38:35.039906 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a751413-e386-4261-bcb7-830a111a4399" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.039918 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a751413-e386-4261-bcb7-830a111a4399" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.040262 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1e7203-5404-4a0a-97f4-13742b74c14d" containerName="registry-server" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.040290 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a751413-e386-4261-bcb7-830a111a4399" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.041581 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.044948 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.045061 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46tr9" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.047009 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.051138 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.068410 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm"] Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.151075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhbqf\" (UniqueName: \"kubernetes.io/projected/92a3c856-2ffd-4e1b-9178-81719ac447f5-kube-api-access-fhbqf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.151180 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:38:35 crc 
kubenswrapper[4805]: I0216 21:38:35.151279 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.253424 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.253925 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhbqf\" (UniqueName: \"kubernetes.io/projected/92a3c856-2ffd-4e1b-9178-81719ac447f5-kube-api-access-fhbqf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.254146 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.260643 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.262638 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.277083 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhbqf\" (UniqueName: \"kubernetes.io/projected/92a3c856-2ffd-4e1b-9178-81719ac447f5-kube-api-access-fhbqf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.370475 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:38:35 crc kubenswrapper[4805]: I0216 21:38:35.929519 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm"] Feb 16 21:38:35 crc kubenswrapper[4805]: W0216 21:38:35.933193 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92a3c856_2ffd_4e1b_9178_81719ac447f5.slice/crio-e6ba381a1f109a34928e42ee6dc7e9bd76d591d4f1e273051a15d76244eb50f8 WatchSource:0}: Error finding container e6ba381a1f109a34928e42ee6dc7e9bd76d591d4f1e273051a15d76244eb50f8: Status 404 returned error can't find the container with id e6ba381a1f109a34928e42ee6dc7e9bd76d591d4f1e273051a15d76244eb50f8 Feb 16 21:38:36 crc kubenswrapper[4805]: I0216 21:38:36.155218 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" event={"ID":"92a3c856-2ffd-4e1b-9178-81719ac447f5","Type":"ContainerStarted","Data":"e6ba381a1f109a34928e42ee6dc7e9bd76d591d4f1e273051a15d76244eb50f8"} Feb 16 21:38:37 crc kubenswrapper[4805]: I0216 21:38:37.168483 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" event={"ID":"92a3c856-2ffd-4e1b-9178-81719ac447f5","Type":"ContainerStarted","Data":"6fdb02ccc4af5f2442eed7387590934b59a9473707c6b23818428210e5f2a544"} Feb 16 21:38:37 crc kubenswrapper[4805]: I0216 21:38:37.191148 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" podStartSLOduration=1.69623725 podStartE2EDuration="2.191126499s" podCreationTimestamp="2026-02-16 21:38:35 +0000 UTC" firstStartedPulling="2026-02-16 21:38:35.935468585 +0000 UTC m=+2533.754151890" lastFinishedPulling="2026-02-16 21:38:36.430357824 +0000 UTC 
m=+2534.249041139" observedRunningTime="2026-02-16 21:38:37.184368916 +0000 UTC m=+2535.003052301" watchObservedRunningTime="2026-02-16 21:38:37.191126499 +0000 UTC m=+2535.009809794" Feb 16 21:38:37 crc kubenswrapper[4805]: E0216 21:38:37.601149 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:38:43 crc kubenswrapper[4805]: E0216 21:38:43.608119 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:38:51 crc kubenswrapper[4805]: E0216 21:38:51.601459 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:38:55 crc kubenswrapper[4805]: E0216 21:38:55.600981 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:39:03 crc kubenswrapper[4805]: E0216 21:39:03.600035 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:39:06 crc kubenswrapper[4805]: E0216 21:39:06.600324 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:39:15 crc kubenswrapper[4805]: E0216 21:39:15.600361 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:39:17 crc kubenswrapper[4805]: E0216 21:39:17.599667 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:39:26 crc kubenswrapper[4805]: E0216 21:39:26.603491 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:39:29 crc kubenswrapper[4805]: E0216 21:39:29.600990 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:39:37 crc kubenswrapper[4805]: E0216 21:39:37.601228 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:39:43 crc kubenswrapper[4805]: E0216 21:39:43.617640 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:39:48 crc kubenswrapper[4805]: E0216 21:39:48.601262 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:39:54 crc kubenswrapper[4805]: E0216 21:39:54.604386 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:40:01 crc kubenswrapper[4805]: E0216 21:40:01.603286 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:40:08 crc kubenswrapper[4805]: I0216 21:40:08.100035 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:40:08 crc kubenswrapper[4805]: I0216 21:40:08.100624 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:40:09 crc kubenswrapper[4805]: E0216 21:40:09.601320 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:40:12 crc kubenswrapper[4805]: I0216 21:40:12.936932 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7xsw2"] Feb 16 21:40:12 crc kubenswrapper[4805]: I0216 21:40:12.940306 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:12 crc kubenswrapper[4805]: I0216 21:40:12.954584 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7xsw2"] Feb 16 21:40:13 crc kubenswrapper[4805]: I0216 21:40:13.061271 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-catalog-content\") pod \"redhat-operators-7xsw2\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:13 crc kubenswrapper[4805]: I0216 21:40:13.062154 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4dhw\" (UniqueName: \"kubernetes.io/projected/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-kube-api-access-l4dhw\") pod \"redhat-operators-7xsw2\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:13 crc kubenswrapper[4805]: I0216 21:40:13.062267 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-utilities\") pod \"redhat-operators-7xsw2\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:13 crc kubenswrapper[4805]: I0216 21:40:13.165456 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-catalog-content\") pod \"redhat-operators-7xsw2\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:13 crc kubenswrapper[4805]: I0216 21:40:13.165699 4805 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-l4dhw\" (UniqueName: \"kubernetes.io/projected/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-kube-api-access-l4dhw\") pod \"redhat-operators-7xsw2\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:13 crc kubenswrapper[4805]: I0216 21:40:13.165746 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-utilities\") pod \"redhat-operators-7xsw2\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:13 crc kubenswrapper[4805]: I0216 21:40:13.166389 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-utilities\") pod \"redhat-operators-7xsw2\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:13 crc kubenswrapper[4805]: I0216 21:40:13.169636 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-catalog-content\") pod \"redhat-operators-7xsw2\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:13 crc kubenswrapper[4805]: I0216 21:40:13.189020 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4dhw\" (UniqueName: \"kubernetes.io/projected/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-kube-api-access-l4dhw\") pod \"redhat-operators-7xsw2\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:13 crc kubenswrapper[4805]: I0216 21:40:13.276645 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:13 crc kubenswrapper[4805]: E0216 21:40:13.610925 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:40:13 crc kubenswrapper[4805]: I0216 21:40:13.843449 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7xsw2"] Feb 16 21:40:14 crc kubenswrapper[4805]: I0216 21:40:14.295352 4805 generic.go:334] "Generic (PLEG): container finished" podID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerID="d663f0486f9f7ec7e81211802388c2f99830500667ce792699cfb13da0041781" exitCode=0 Feb 16 21:40:14 crc kubenswrapper[4805]: I0216 21:40:14.295577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xsw2" event={"ID":"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7","Type":"ContainerDied","Data":"d663f0486f9f7ec7e81211802388c2f99830500667ce792699cfb13da0041781"} Feb 16 21:40:14 crc kubenswrapper[4805]: I0216 21:40:14.295653 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xsw2" event={"ID":"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7","Type":"ContainerStarted","Data":"74104d044fb0480f338b72a9a0efbafd994c31d5fbda77e13075ff780f04983e"} Feb 16 21:40:15 crc kubenswrapper[4805]: I0216 21:40:15.312010 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xsw2" event={"ID":"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7","Type":"ContainerStarted","Data":"c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775"} Feb 16 21:40:20 crc kubenswrapper[4805]: I0216 21:40:20.365479 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerID="c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775" exitCode=0 Feb 16 21:40:20 crc kubenswrapper[4805]: I0216 21:40:20.365562 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xsw2" event={"ID":"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7","Type":"ContainerDied","Data":"c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775"} Feb 16 21:40:22 crc kubenswrapper[4805]: I0216 21:40:22.393113 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xsw2" event={"ID":"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7","Type":"ContainerStarted","Data":"86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e"} Feb 16 21:40:22 crc kubenswrapper[4805]: I0216 21:40:22.422495 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7xsw2" podStartSLOduration=3.643626921 podStartE2EDuration="10.42244783s" podCreationTimestamp="2026-02-16 21:40:12 +0000 UTC" firstStartedPulling="2026-02-16 21:40:14.297680746 +0000 UTC m=+2632.116364041" lastFinishedPulling="2026-02-16 21:40:21.076501655 +0000 UTC m=+2638.895184950" observedRunningTime="2026-02-16 21:40:22.410035825 +0000 UTC m=+2640.228719130" watchObservedRunningTime="2026-02-16 21:40:22.42244783 +0000 UTC m=+2640.241131125" Feb 16 21:40:23 crc kubenswrapper[4805]: I0216 21:40:23.278155 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:23 crc kubenswrapper[4805]: I0216 21:40:23.278483 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:23 crc kubenswrapper[4805]: E0216 21:40:23.606929 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:40:24 crc kubenswrapper[4805]: I0216 21:40:24.341327 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7xsw2" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerName="registry-server" probeResult="failure" output=< Feb 16 21:40:24 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:40:24 crc kubenswrapper[4805]: > Feb 16 21:40:26 crc kubenswrapper[4805]: E0216 21:40:26.602072 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:40:34 crc kubenswrapper[4805]: I0216 21:40:34.329468 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7xsw2" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerName="registry-server" probeResult="failure" output=< Feb 16 21:40:34 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:40:34 crc kubenswrapper[4805]: > Feb 16 21:40:35 crc kubenswrapper[4805]: E0216 21:40:35.601251 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:40:37 crc kubenswrapper[4805]: E0216 21:40:37.602000 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:40:38 crc kubenswrapper[4805]: I0216 21:40:38.099561 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:40:38 crc kubenswrapper[4805]: I0216 21:40:38.099622 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:40:44 crc kubenswrapper[4805]: I0216 21:40:44.328093 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7xsw2" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerName="registry-server" probeResult="failure" output=< Feb 16 21:40:44 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:40:44 crc kubenswrapper[4805]: > Feb 16 21:40:49 crc kubenswrapper[4805]: E0216 21:40:49.599702 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:40:51 crc kubenswrapper[4805]: E0216 21:40:51.601408 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:40:53 crc kubenswrapper[4805]: I0216 21:40:53.349956 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:53 crc kubenswrapper[4805]: I0216 21:40:53.409201 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:53 crc kubenswrapper[4805]: I0216 21:40:53.624449 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7xsw2"] Feb 16 21:40:54 crc kubenswrapper[4805]: I0216 21:40:54.755785 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7xsw2" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerName="registry-server" containerID="cri-o://86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e" gracePeriod=2 Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.313415 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.448877 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4dhw\" (UniqueName: \"kubernetes.io/projected/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-kube-api-access-l4dhw\") pod \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.448962 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-utilities\") pod \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.449516 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-catalog-content\") pod \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\" (UID: \"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7\") " Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.451255 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-utilities" (OuterVolumeSpecName: "utilities") pod "76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" (UID: "76f33bcb-a2a6-47e9-a7c5-578d27b18ad7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.454595 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.456660 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-kube-api-access-l4dhw" (OuterVolumeSpecName: "kube-api-access-l4dhw") pod "76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" (UID: "76f33bcb-a2a6-47e9-a7c5-578d27b18ad7"). InnerVolumeSpecName "kube-api-access-l4dhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.556746 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4dhw\" (UniqueName: \"kubernetes.io/projected/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-kube-api-access-l4dhw\") on node \"crc\" DevicePath \"\"" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.591651 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" (UID: "76f33bcb-a2a6-47e9-a7c5-578d27b18ad7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.658625 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.768664 4805 generic.go:334] "Generic (PLEG): container finished" podID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerID="86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e" exitCode=0 Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.768713 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xsw2" event={"ID":"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7","Type":"ContainerDied","Data":"86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e"} Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.768749 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7xsw2" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.769989 4805 scope.go:117] "RemoveContainer" containerID="86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.769890 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xsw2" event={"ID":"76f33bcb-a2a6-47e9-a7c5-578d27b18ad7","Type":"ContainerDied","Data":"74104d044fb0480f338b72a9a0efbafd994c31d5fbda77e13075ff780f04983e"} Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.801417 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7xsw2"] Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.808805 4805 scope.go:117] "RemoveContainer" containerID="c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.814242 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7xsw2"] Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.836966 4805 scope.go:117] "RemoveContainer" containerID="d663f0486f9f7ec7e81211802388c2f99830500667ce792699cfb13da0041781" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.892365 4805 scope.go:117] "RemoveContainer" containerID="86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e" Feb 16 21:40:55 crc kubenswrapper[4805]: E0216 21:40:55.892829 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e\": container with ID starting with 86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e not found: ID does not exist" containerID="86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.892878 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e"} err="failed to get container status \"86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e\": rpc error: code = NotFound desc = could not find container \"86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e\": container with ID starting with 86c7e832713fa7d8135b1589f4dbad7f10ce026824e44b10bd53acaffb7d720e not found: ID does not exist" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.892903 4805 scope.go:117] "RemoveContainer" containerID="c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775" Feb 16 21:40:55 crc kubenswrapper[4805]: E0216 21:40:55.893218 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775\": container with ID starting with c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775 not found: ID does not exist" containerID="c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.893249 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775"} err="failed to get container status \"c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775\": rpc error: code = NotFound desc = could not find container \"c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775\": container with ID starting with c06164c7fc8d5bd150ea9c6f312102365115de5e75df79de5d3a4b9604c07775 not found: ID does not exist" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.893270 4805 scope.go:117] "RemoveContainer" containerID="d663f0486f9f7ec7e81211802388c2f99830500667ce792699cfb13da0041781" Feb 16 21:40:55 crc kubenswrapper[4805]: E0216 
21:40:55.893612 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d663f0486f9f7ec7e81211802388c2f99830500667ce792699cfb13da0041781\": container with ID starting with d663f0486f9f7ec7e81211802388c2f99830500667ce792699cfb13da0041781 not found: ID does not exist" containerID="d663f0486f9f7ec7e81211802388c2f99830500667ce792699cfb13da0041781" Feb 16 21:40:55 crc kubenswrapper[4805]: I0216 21:40:55.893632 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d663f0486f9f7ec7e81211802388c2f99830500667ce792699cfb13da0041781"} err="failed to get container status \"d663f0486f9f7ec7e81211802388c2f99830500667ce792699cfb13da0041781\": rpc error: code = NotFound desc = could not find container \"d663f0486f9f7ec7e81211802388c2f99830500667ce792699cfb13da0041781\": container with ID starting with d663f0486f9f7ec7e81211802388c2f99830500667ce792699cfb13da0041781 not found: ID does not exist" Feb 16 21:40:57 crc kubenswrapper[4805]: I0216 21:40:57.612882 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" path="/var/lib/kubelet/pods/76f33bcb-a2a6-47e9-a7c5-578d27b18ad7/volumes" Feb 16 21:41:02 crc kubenswrapper[4805]: E0216 21:41:02.599902 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:41:02 crc kubenswrapper[4805]: E0216 21:41:02.599973 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:41:08 crc kubenswrapper[4805]: I0216 21:41:08.100145 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:41:08 crc kubenswrapper[4805]: I0216 21:41:08.101812 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:41:08 crc kubenswrapper[4805]: I0216 21:41:08.101971 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:41:08 crc kubenswrapper[4805]: I0216 21:41:08.102921 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7aab641f1074349faea491ff7070bb250cc46b4fa5994780dd852f3f88eb092"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:41:08 crc kubenswrapper[4805]: I0216 21:41:08.103065 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://e7aab641f1074349faea491ff7070bb250cc46b4fa5994780dd852f3f88eb092" gracePeriod=600 Feb 16 21:41:08 crc kubenswrapper[4805]: I0216 21:41:08.948168 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="e7aab641f1074349faea491ff7070bb250cc46b4fa5994780dd852f3f88eb092" exitCode=0 Feb 16 21:41:08 crc kubenswrapper[4805]: I0216 21:41:08.948259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"e7aab641f1074349faea491ff7070bb250cc46b4fa5994780dd852f3f88eb092"} Feb 16 21:41:08 crc kubenswrapper[4805]: I0216 21:41:08.948806 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459"} Feb 16 21:41:08 crc kubenswrapper[4805]: I0216 21:41:08.948840 4805 scope.go:117] "RemoveContainer" containerID="91aa37e503c8e836c8988138cc85245997e320986f82bffa38b37628036f3bac" Feb 16 21:41:15 crc kubenswrapper[4805]: E0216 21:41:15.600392 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:41:17 crc kubenswrapper[4805]: E0216 21:41:17.602232 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:41:28 crc kubenswrapper[4805]: I0216 21:41:28.600582 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:41:28 crc kubenswrapper[4805]: 
E0216 21:41:28.709216 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:41:28 crc kubenswrapper[4805]: E0216 21:41:28.709281 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:41:28 crc kubenswrapper[4805]: E0216 21:41:28.709405 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:41:28 crc kubenswrapper[4805]: E0216 21:41:28.710980 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:41:30 crc kubenswrapper[4805]: E0216 21:41:30.600784 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:41:41 crc kubenswrapper[4805]: E0216 21:41:41.601603 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:41:44 crc kubenswrapper[4805]: E0216 21:41:44.599750 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:41:55 crc kubenswrapper[4805]: E0216 21:41:55.601359 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:41:55 crc kubenswrapper[4805]: E0216 21:41:55.719293 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:41:55 crc kubenswrapper[4805]: E0216 21:41:55.719778 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:41:55 crc kubenswrapper[4805]: E0216 21:41:55.719987 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:41:55 crc kubenswrapper[4805]: E0216 21:41:55.721220 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:42:08 crc kubenswrapper[4805]: E0216 21:42:08.600348 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:42:09 crc kubenswrapper[4805]: E0216 21:42:09.601035 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:42:19 crc kubenswrapper[4805]: E0216 21:42:19.601433 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:42:24 crc kubenswrapper[4805]: E0216 21:42:24.600571 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:42:31 crc kubenswrapper[4805]: E0216 21:42:31.599878 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:42:37 crc kubenswrapper[4805]: E0216 21:42:37.602746 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:42:44 crc kubenswrapper[4805]: E0216 21:42:44.600320 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:42:52 crc kubenswrapper[4805]: E0216 21:42:52.600189 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:42:55 crc kubenswrapper[4805]: E0216 21:42:55.626762 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:43:05 crc kubenswrapper[4805]: E0216 21:43:05.602068 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:43:08 crc kubenswrapper[4805]: I0216 21:43:08.099393 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:43:08 crc kubenswrapper[4805]: I0216 21:43:08.100059 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:43:09 crc kubenswrapper[4805]: E0216 21:43:09.601424 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:43:20 crc kubenswrapper[4805]: E0216 21:43:20.602100 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.283822 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x7f6s"] Feb 16 21:43:22 crc kubenswrapper[4805]: E0216 21:43:22.286401 4805 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerName="extract-utilities" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.286582 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerName="extract-utilities" Feb 16 21:43:22 crc kubenswrapper[4805]: E0216 21:43:22.286768 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerName="registry-server" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.286912 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerName="registry-server" Feb 16 21:43:22 crc kubenswrapper[4805]: E0216 21:43:22.287059 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerName="extract-content" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.287182 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerName="extract-content" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.287864 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="76f33bcb-a2a6-47e9-a7c5-578d27b18ad7" containerName="registry-server" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.291793 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.300630 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7f6s"] Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.442498 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-utilities\") pod \"redhat-marketplace-x7f6s\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.442949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-catalog-content\") pod \"redhat-marketplace-x7f6s\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.443026 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg2pz\" (UniqueName: \"kubernetes.io/projected/a30c5122-bbdb-44ce-be4a-028705f92096-kube-api-access-rg2pz\") pod \"redhat-marketplace-x7f6s\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.545520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-utilities\") pod \"redhat-marketplace-x7f6s\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.545597 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-catalog-content\") pod \"redhat-marketplace-x7f6s\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.545643 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2pz\" (UniqueName: \"kubernetes.io/projected/a30c5122-bbdb-44ce-be4a-028705f92096-kube-api-access-rg2pz\") pod \"redhat-marketplace-x7f6s\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.546497 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-catalog-content\") pod \"redhat-marketplace-x7f6s\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.546596 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-utilities\") pod \"redhat-marketplace-x7f6s\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.567808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg2pz\" (UniqueName: \"kubernetes.io/projected/a30c5122-bbdb-44ce-be4a-028705f92096-kube-api-access-rg2pz\") pod \"redhat-marketplace-x7f6s\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:22 crc kubenswrapper[4805]: I0216 21:43:22.628269 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:23 crc kubenswrapper[4805]: I0216 21:43:23.164921 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7f6s"] Feb 16 21:43:23 crc kubenswrapper[4805]: I0216 21:43:23.408751 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7f6s" event={"ID":"a30c5122-bbdb-44ce-be4a-028705f92096","Type":"ContainerStarted","Data":"f5365de287198eb3c8c26ccdd3ccf2fb626481b50b39ce83c97c014e80160daa"} Feb 16 21:43:24 crc kubenswrapper[4805]: I0216 21:43:24.432048 4805 generic.go:334] "Generic (PLEG): container finished" podID="a30c5122-bbdb-44ce-be4a-028705f92096" containerID="7d386e9849a88d07835d128f094093d37b820a44187b383e99abe4c0424868e4" exitCode=0 Feb 16 21:43:24 crc kubenswrapper[4805]: I0216 21:43:24.432180 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7f6s" event={"ID":"a30c5122-bbdb-44ce-be4a-028705f92096","Type":"ContainerDied","Data":"7d386e9849a88d07835d128f094093d37b820a44187b383e99abe4c0424868e4"} Feb 16 21:43:24 crc kubenswrapper[4805]: E0216 21:43:24.599330 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:43:25 crc kubenswrapper[4805]: I0216 21:43:25.450594 4805 generic.go:334] "Generic (PLEG): container finished" podID="a30c5122-bbdb-44ce-be4a-028705f92096" containerID="0083e3d1ff1cf7091812e269c70485fcf62a04a5616a22f9c552163d8e66630e" exitCode=0 Feb 16 21:43:25 crc kubenswrapper[4805]: I0216 21:43:25.450703 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7f6s" 
event={"ID":"a30c5122-bbdb-44ce-be4a-028705f92096","Type":"ContainerDied","Data":"0083e3d1ff1cf7091812e269c70485fcf62a04a5616a22f9c552163d8e66630e"} Feb 16 21:43:26 crc kubenswrapper[4805]: I0216 21:43:26.462953 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7f6s" event={"ID":"a30c5122-bbdb-44ce-be4a-028705f92096","Type":"ContainerStarted","Data":"6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a"} Feb 16 21:43:26 crc kubenswrapper[4805]: I0216 21:43:26.484639 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x7f6s" podStartSLOduration=3.08375449 podStartE2EDuration="4.484621681s" podCreationTimestamp="2026-02-16 21:43:22 +0000 UTC" firstStartedPulling="2026-02-16 21:43:24.436328484 +0000 UTC m=+2822.255011789" lastFinishedPulling="2026-02-16 21:43:25.837195685 +0000 UTC m=+2823.655878980" observedRunningTime="2026-02-16 21:43:26.478071654 +0000 UTC m=+2824.296754949" watchObservedRunningTime="2026-02-16 21:43:26.484621681 +0000 UTC m=+2824.303304966" Feb 16 21:43:32 crc kubenswrapper[4805]: I0216 21:43:32.629114 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:32 crc kubenswrapper[4805]: I0216 21:43:32.630280 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:32 crc kubenswrapper[4805]: I0216 21:43:32.691183 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:33 crc kubenswrapper[4805]: I0216 21:43:33.622143 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:33 crc kubenswrapper[4805]: I0216 21:43:33.677943 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-x7f6s"] Feb 16 21:43:35 crc kubenswrapper[4805]: I0216 21:43:35.555564 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x7f6s" podUID="a30c5122-bbdb-44ce-be4a-028705f92096" containerName="registry-server" containerID="cri-o://6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a" gracePeriod=2 Feb 16 21:43:35 crc kubenswrapper[4805]: E0216 21:43:35.600872 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.040250 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.094051 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg2pz\" (UniqueName: \"kubernetes.io/projected/a30c5122-bbdb-44ce-be4a-028705f92096-kube-api-access-rg2pz\") pod \"a30c5122-bbdb-44ce-be4a-028705f92096\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.094101 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-catalog-content\") pod \"a30c5122-bbdb-44ce-be4a-028705f92096\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.094364 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-utilities\") pod \"a30c5122-bbdb-44ce-be4a-028705f92096\" (UID: \"a30c5122-bbdb-44ce-be4a-028705f92096\") " Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.095110 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-utilities" (OuterVolumeSpecName: "utilities") pod "a30c5122-bbdb-44ce-be4a-028705f92096" (UID: "a30c5122-bbdb-44ce-be4a-028705f92096"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.106432 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a30c5122-bbdb-44ce-be4a-028705f92096-kube-api-access-rg2pz" (OuterVolumeSpecName: "kube-api-access-rg2pz") pod "a30c5122-bbdb-44ce-be4a-028705f92096" (UID: "a30c5122-bbdb-44ce-be4a-028705f92096"). InnerVolumeSpecName "kube-api-access-rg2pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.129816 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a30c5122-bbdb-44ce-be4a-028705f92096" (UID: "a30c5122-bbdb-44ce-be4a-028705f92096"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.198712 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.198797 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg2pz\" (UniqueName: \"kubernetes.io/projected/a30c5122-bbdb-44ce-be4a-028705f92096-kube-api-access-rg2pz\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.198810 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a30c5122-bbdb-44ce-be4a-028705f92096-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.581297 4805 generic.go:334] "Generic (PLEG): container finished" podID="a30c5122-bbdb-44ce-be4a-028705f92096" containerID="6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a" exitCode=0 Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.581393 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7f6s" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.581430 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7f6s" event={"ID":"a30c5122-bbdb-44ce-be4a-028705f92096","Type":"ContainerDied","Data":"6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a"} Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.582989 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7f6s" event={"ID":"a30c5122-bbdb-44ce-be4a-028705f92096","Type":"ContainerDied","Data":"f5365de287198eb3c8c26ccdd3ccf2fb626481b50b39ce83c97c014e80160daa"} Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.583025 4805 scope.go:117] "RemoveContainer" containerID="6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.619298 4805 scope.go:117] "RemoveContainer" containerID="0083e3d1ff1cf7091812e269c70485fcf62a04a5616a22f9c552163d8e66630e" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.648902 4805 scope.go:117] "RemoveContainer" containerID="7d386e9849a88d07835d128f094093d37b820a44187b383e99abe4c0424868e4" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.649992 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7f6s"] Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.661432 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7f6s"] Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.716257 4805 scope.go:117] "RemoveContainer" containerID="6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a" Feb 16 21:43:36 crc kubenswrapper[4805]: E0216 21:43:36.716730 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a\": container with ID starting with 6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a not found: ID does not exist" containerID="6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.716781 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a"} err="failed to get container status \"6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a\": rpc error: code = NotFound desc = could not find container \"6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a\": container with ID starting with 6a24173712625d6c539c0672f981f6aa050887cbdbda70b82e48b9fa9894648a not found: ID does not exist" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.716841 4805 scope.go:117] "RemoveContainer" containerID="0083e3d1ff1cf7091812e269c70485fcf62a04a5616a22f9c552163d8e66630e" Feb 16 21:43:36 crc kubenswrapper[4805]: E0216 21:43:36.717223 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0083e3d1ff1cf7091812e269c70485fcf62a04a5616a22f9c552163d8e66630e\": container with ID starting with 0083e3d1ff1cf7091812e269c70485fcf62a04a5616a22f9c552163d8e66630e not found: ID does not exist" containerID="0083e3d1ff1cf7091812e269c70485fcf62a04a5616a22f9c552163d8e66630e" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.717281 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0083e3d1ff1cf7091812e269c70485fcf62a04a5616a22f9c552163d8e66630e"} err="failed to get container status \"0083e3d1ff1cf7091812e269c70485fcf62a04a5616a22f9c552163d8e66630e\": rpc error: code = NotFound desc = could not find container \"0083e3d1ff1cf7091812e269c70485fcf62a04a5616a22f9c552163d8e66630e\": container with ID 
starting with 0083e3d1ff1cf7091812e269c70485fcf62a04a5616a22f9c552163d8e66630e not found: ID does not exist" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.717304 4805 scope.go:117] "RemoveContainer" containerID="7d386e9849a88d07835d128f094093d37b820a44187b383e99abe4c0424868e4" Feb 16 21:43:36 crc kubenswrapper[4805]: E0216 21:43:36.717563 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d386e9849a88d07835d128f094093d37b820a44187b383e99abe4c0424868e4\": container with ID starting with 7d386e9849a88d07835d128f094093d37b820a44187b383e99abe4c0424868e4 not found: ID does not exist" containerID="7d386e9849a88d07835d128f094093d37b820a44187b383e99abe4c0424868e4" Feb 16 21:43:36 crc kubenswrapper[4805]: I0216 21:43:36.717582 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d386e9849a88d07835d128f094093d37b820a44187b383e99abe4c0424868e4"} err="failed to get container status \"7d386e9849a88d07835d128f094093d37b820a44187b383e99abe4c0424868e4\": rpc error: code = NotFound desc = could not find container \"7d386e9849a88d07835d128f094093d37b820a44187b383e99abe4c0424868e4\": container with ID starting with 7d386e9849a88d07835d128f094093d37b820a44187b383e99abe4c0424868e4 not found: ID does not exist" Feb 16 21:43:37 crc kubenswrapper[4805]: I0216 21:43:37.614033 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a30c5122-bbdb-44ce-be4a-028705f92096" path="/var/lib/kubelet/pods/a30c5122-bbdb-44ce-be4a-028705f92096/volumes" Feb 16 21:43:38 crc kubenswrapper[4805]: I0216 21:43:38.099392 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:43:38 crc kubenswrapper[4805]: I0216 
21:43:38.099462 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:43:39 crc kubenswrapper[4805]: E0216 21:43:39.602028 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:43:49 crc kubenswrapper[4805]: E0216 21:43:49.608596 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:43:51 crc kubenswrapper[4805]: E0216 21:43:51.602243 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:44:03 crc kubenswrapper[4805]: E0216 21:44:03.609062 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:44:04 crc kubenswrapper[4805]: 
E0216 21:44:04.600677 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:44:08 crc kubenswrapper[4805]: I0216 21:44:08.099795 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:44:08 crc kubenswrapper[4805]: I0216 21:44:08.100536 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:44:08 crc kubenswrapper[4805]: I0216 21:44:08.100613 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:44:08 crc kubenswrapper[4805]: I0216 21:44:08.102218 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:44:08 crc kubenswrapper[4805]: I0216 21:44:08.102368 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" 
containerName="machine-config-daemon" containerID="cri-o://54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" gracePeriod=600 Feb 16 21:44:08 crc kubenswrapper[4805]: E0216 21:44:08.264502 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:44:08 crc kubenswrapper[4805]: I0216 21:44:08.622925 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" exitCode=0 Feb 16 21:44:08 crc kubenswrapper[4805]: I0216 21:44:08.622985 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459"} Feb 16 21:44:08 crc kubenswrapper[4805]: I0216 21:44:08.623951 4805 scope.go:117] "RemoveContainer" containerID="e7aab641f1074349faea491ff7070bb250cc46b4fa5994780dd852f3f88eb092" Feb 16 21:44:08 crc kubenswrapper[4805]: I0216 21:44:08.625338 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:44:08 crc kubenswrapper[4805]: E0216 21:44:08.625849 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:44:16 crc kubenswrapper[4805]: E0216 21:44:16.601264 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:44:19 crc kubenswrapper[4805]: E0216 21:44:19.602154 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:44:23 crc kubenswrapper[4805]: I0216 21:44:23.609193 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:44:23 crc kubenswrapper[4805]: E0216 21:44:23.610134 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:44:29 crc kubenswrapper[4805]: E0216 21:44:29.600939 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" 
Feb 16 21:44:30 crc kubenswrapper[4805]: E0216 21:44:30.600911 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:44:36 crc kubenswrapper[4805]: I0216 21:44:36.598256 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:44:36 crc kubenswrapper[4805]: E0216 21:44:36.599446 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:44:42 crc kubenswrapper[4805]: E0216 21:44:42.602056 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:44:44 crc kubenswrapper[4805]: E0216 21:44:44.602430 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:44:47 crc kubenswrapper[4805]: I0216 21:44:47.598727 4805 scope.go:117] "RemoveContainer" 
containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:44:47 crc kubenswrapper[4805]: E0216 21:44:47.599403 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:44:52 crc kubenswrapper[4805]: I0216 21:44:52.196964 4805 generic.go:334] "Generic (PLEG): container finished" podID="92a3c856-2ffd-4e1b-9178-81719ac447f5" containerID="6fdb02ccc4af5f2442eed7387590934b59a9473707c6b23818428210e5f2a544" exitCode=2 Feb 16 21:44:52 crc kubenswrapper[4805]: I0216 21:44:52.197072 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" event={"ID":"92a3c856-2ffd-4e1b-9178-81719ac447f5","Type":"ContainerDied","Data":"6fdb02ccc4af5f2442eed7387590934b59a9473707c6b23818428210e5f2a544"} Feb 16 21:44:53 crc kubenswrapper[4805]: I0216 21:44:53.729755 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:44:53 crc kubenswrapper[4805]: I0216 21:44:53.852130 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-inventory\") pod \"92a3c856-2ffd-4e1b-9178-81719ac447f5\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " Feb 16 21:44:53 crc kubenswrapper[4805]: I0216 21:44:53.852463 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-ssh-key-openstack-edpm-ipam\") pod \"92a3c856-2ffd-4e1b-9178-81719ac447f5\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " Feb 16 21:44:53 crc kubenswrapper[4805]: I0216 21:44:53.852563 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhbqf\" (UniqueName: \"kubernetes.io/projected/92a3c856-2ffd-4e1b-9178-81719ac447f5-kube-api-access-fhbqf\") pod \"92a3c856-2ffd-4e1b-9178-81719ac447f5\" (UID: \"92a3c856-2ffd-4e1b-9178-81719ac447f5\") " Feb 16 21:44:53 crc kubenswrapper[4805]: I0216 21:44:53.862153 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a3c856-2ffd-4e1b-9178-81719ac447f5-kube-api-access-fhbqf" (OuterVolumeSpecName: "kube-api-access-fhbqf") pod "92a3c856-2ffd-4e1b-9178-81719ac447f5" (UID: "92a3c856-2ffd-4e1b-9178-81719ac447f5"). InnerVolumeSpecName "kube-api-access-fhbqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:44:53 crc kubenswrapper[4805]: I0216 21:44:53.913026 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92a3c856-2ffd-4e1b-9178-81719ac447f5" (UID: "92a3c856-2ffd-4e1b-9178-81719ac447f5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:44:53 crc kubenswrapper[4805]: I0216 21:44:53.914932 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-inventory" (OuterVolumeSpecName: "inventory") pod "92a3c856-2ffd-4e1b-9178-81719ac447f5" (UID: "92a3c856-2ffd-4e1b-9178-81719ac447f5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:44:53 crc kubenswrapper[4805]: I0216 21:44:53.955623 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:53 crc kubenswrapper[4805]: I0216 21:44:53.955673 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhbqf\" (UniqueName: \"kubernetes.io/projected/92a3c856-2ffd-4e1b-9178-81719ac447f5-kube-api-access-fhbqf\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:53 crc kubenswrapper[4805]: I0216 21:44:53.955687 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92a3c856-2ffd-4e1b-9178-81719ac447f5-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:54 crc kubenswrapper[4805]: I0216 21:44:54.222120 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" 
event={"ID":"92a3c856-2ffd-4e1b-9178-81719ac447f5","Type":"ContainerDied","Data":"e6ba381a1f109a34928e42ee6dc7e9bd76d591d4f1e273051a15d76244eb50f8"} Feb 16 21:44:54 crc kubenswrapper[4805]: I0216 21:44:54.222179 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6ba381a1f109a34928e42ee6dc7e9bd76d591d4f1e273051a15d76244eb50f8" Feb 16 21:44:54 crc kubenswrapper[4805]: I0216 21:44:54.222614 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm" Feb 16 21:44:54 crc kubenswrapper[4805]: E0216 21:44:54.599948 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:44:59 crc kubenswrapper[4805]: E0216 21:44:59.601326 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.180491 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm"] Feb 16 21:45:00 crc kubenswrapper[4805]: E0216 21:45:00.181486 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30c5122-bbdb-44ce-be4a-028705f92096" containerName="extract-content" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.181507 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30c5122-bbdb-44ce-be4a-028705f92096" containerName="extract-content" Feb 16 21:45:00 crc 
kubenswrapper[4805]: E0216 21:45:00.181526 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30c5122-bbdb-44ce-be4a-028705f92096" containerName="registry-server" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.181534 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30c5122-bbdb-44ce-be4a-028705f92096" containerName="registry-server" Feb 16 21:45:00 crc kubenswrapper[4805]: E0216 21:45:00.181588 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30c5122-bbdb-44ce-be4a-028705f92096" containerName="extract-utilities" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.181598 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30c5122-bbdb-44ce-be4a-028705f92096" containerName="extract-utilities" Feb 16 21:45:00 crc kubenswrapper[4805]: E0216 21:45:00.181609 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a3c856-2ffd-4e1b-9178-81719ac447f5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.181620 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a3c856-2ffd-4e1b-9178-81719ac447f5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.181924 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30c5122-bbdb-44ce-be4a-028705f92096" containerName="registry-server" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.181956 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="92a3c856-2ffd-4e1b-9178-81719ac447f5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.183127 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.185707 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.186711 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.196636 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm"] Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.331561 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vdjd\" (UniqueName: \"kubernetes.io/projected/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-kube-api-access-9vdjd\") pod \"collect-profiles-29521305-fchzm\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.331625 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-config-volume\") pod \"collect-profiles-29521305-fchzm\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.332566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-secret-volume\") pod \"collect-profiles-29521305-fchzm\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.435116 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-secret-volume\") pod \"collect-profiles-29521305-fchzm\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.435538 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vdjd\" (UniqueName: \"kubernetes.io/projected/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-kube-api-access-9vdjd\") pod \"collect-profiles-29521305-fchzm\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.435606 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-config-volume\") pod \"collect-profiles-29521305-fchzm\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.436649 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-config-volume\") pod \"collect-profiles-29521305-fchzm\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.454650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-secret-volume\") pod \"collect-profiles-29521305-fchzm\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.458949 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vdjd\" (UniqueName: \"kubernetes.io/projected/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-kube-api-access-9vdjd\") pod \"collect-profiles-29521305-fchzm\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:00 crc kubenswrapper[4805]: I0216 21:45:00.518642 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:01 crc kubenswrapper[4805]: I0216 21:45:01.010290 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm"] Feb 16 21:45:01 crc kubenswrapper[4805]: I0216 21:45:01.293169 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" event={"ID":"65a2300b-5c13-4318-a8b8-27ff7dad9fe7","Type":"ContainerStarted","Data":"e32c1f8c5d9291b38f9735491ecb0030aa76d5175d6c7d3d6fef0f8f8911eae4"} Feb 16 21:45:01 crc kubenswrapper[4805]: I0216 21:45:01.293227 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" event={"ID":"65a2300b-5c13-4318-a8b8-27ff7dad9fe7","Type":"ContainerStarted","Data":"0a3dbd4b0aa4b7c13d3a0ef814fa1d45e04a96bc30b58588ccec8e9f7594ea8c"} Feb 16 21:45:01 crc kubenswrapper[4805]: I0216 21:45:01.317445 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" 
podStartSLOduration=1.317426032 podStartE2EDuration="1.317426032s" podCreationTimestamp="2026-02-16 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:45:01.311202933 +0000 UTC m=+2919.129886218" watchObservedRunningTime="2026-02-16 21:45:01.317426032 +0000 UTC m=+2919.136109317" Feb 16 21:45:01 crc kubenswrapper[4805]: I0216 21:45:01.599597 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:45:01 crc kubenswrapper[4805]: E0216 21:45:01.599903 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:45:02 crc kubenswrapper[4805]: I0216 21:45:02.321072 4805 generic.go:334] "Generic (PLEG): container finished" podID="65a2300b-5c13-4318-a8b8-27ff7dad9fe7" containerID="e32c1f8c5d9291b38f9735491ecb0030aa76d5175d6c7d3d6fef0f8f8911eae4" exitCode=0 Feb 16 21:45:02 crc kubenswrapper[4805]: I0216 21:45:02.321490 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" event={"ID":"65a2300b-5c13-4318-a8b8-27ff7dad9fe7","Type":"ContainerDied","Data":"e32c1f8c5d9291b38f9735491ecb0030aa76d5175d6c7d3d6fef0f8f8911eae4"} Feb 16 21:45:03 crc kubenswrapper[4805]: I0216 21:45:03.716627 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:03 crc kubenswrapper[4805]: I0216 21:45:03.736648 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-config-volume\") pod \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " Feb 16 21:45:03 crc kubenswrapper[4805]: I0216 21:45:03.737194 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-secret-volume\") pod \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " Feb 16 21:45:03 crc kubenswrapper[4805]: I0216 21:45:03.737283 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vdjd\" (UniqueName: \"kubernetes.io/projected/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-kube-api-access-9vdjd\") pod \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\" (UID: \"65a2300b-5c13-4318-a8b8-27ff7dad9fe7\") " Feb 16 21:45:03 crc kubenswrapper[4805]: I0216 21:45:03.738015 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-config-volume" (OuterVolumeSpecName: "config-volume") pod "65a2300b-5c13-4318-a8b8-27ff7dad9fe7" (UID: "65a2300b-5c13-4318-a8b8-27ff7dad9fe7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:45:03 crc kubenswrapper[4805]: I0216 21:45:03.740516 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:03 crc kubenswrapper[4805]: I0216 21:45:03.744707 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "65a2300b-5c13-4318-a8b8-27ff7dad9fe7" (UID: "65a2300b-5c13-4318-a8b8-27ff7dad9fe7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:45:03 crc kubenswrapper[4805]: I0216 21:45:03.748640 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-kube-api-access-9vdjd" (OuterVolumeSpecName: "kube-api-access-9vdjd") pod "65a2300b-5c13-4318-a8b8-27ff7dad9fe7" (UID: "65a2300b-5c13-4318-a8b8-27ff7dad9fe7"). InnerVolumeSpecName "kube-api-access-9vdjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:45:03 crc kubenswrapper[4805]: I0216 21:45:03.841850 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:03 crc kubenswrapper[4805]: I0216 21:45:03.842144 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vdjd\" (UniqueName: \"kubernetes.io/projected/65a2300b-5c13-4318-a8b8-27ff7dad9fe7-kube-api-access-9vdjd\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:04 crc kubenswrapper[4805]: I0216 21:45:04.346798 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" event={"ID":"65a2300b-5c13-4318-a8b8-27ff7dad9fe7","Type":"ContainerDied","Data":"0a3dbd4b0aa4b7c13d3a0ef814fa1d45e04a96bc30b58588ccec8e9f7594ea8c"} Feb 16 21:45:04 crc kubenswrapper[4805]: I0216 21:45:04.346848 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a3dbd4b0aa4b7c13d3a0ef814fa1d45e04a96bc30b58588ccec8e9f7594ea8c" Feb 16 21:45:04 crc kubenswrapper[4805]: I0216 21:45:04.346879 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm" Feb 16 21:45:04 crc kubenswrapper[4805]: I0216 21:45:04.397141 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92"] Feb 16 21:45:04 crc kubenswrapper[4805]: I0216 21:45:04.409788 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-4cf92"] Feb 16 21:45:05 crc kubenswrapper[4805]: I0216 21:45:05.640764 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce801e4-7f25-47d8-8860-e939a652ed28" path="/var/lib/kubelet/pods/bce801e4-7f25-47d8-8860-e939a652ed28/volumes" Feb 16 21:45:08 crc kubenswrapper[4805]: E0216 21:45:08.599794 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:45:10 crc kubenswrapper[4805]: E0216 21:45:10.603331 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:45:12 crc kubenswrapper[4805]: I0216 21:45:12.598625 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:45:12 crc kubenswrapper[4805]: E0216 21:45:12.599547 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.069327 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q2nfd"] Feb 16 21:45:19 crc kubenswrapper[4805]: E0216 21:45:19.070493 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a2300b-5c13-4318-a8b8-27ff7dad9fe7" containerName="collect-profiles" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.070518 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a2300b-5c13-4318-a8b8-27ff7dad9fe7" containerName="collect-profiles" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.070931 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a2300b-5c13-4318-a8b8-27ff7dad9fe7" containerName="collect-profiles" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.073592 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.082931 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2nfd"] Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.206124 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-catalog-content\") pod \"community-operators-q2nfd\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.206403 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-utilities\") pod \"community-operators-q2nfd\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.206595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsk9s\" (UniqueName: \"kubernetes.io/projected/372c9570-8d33-406b-af30-4a98b6a177ce-kube-api-access-qsk9s\") pod \"community-operators-q2nfd\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.308960 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsk9s\" (UniqueName: \"kubernetes.io/projected/372c9570-8d33-406b-af30-4a98b6a177ce-kube-api-access-qsk9s\") pod \"community-operators-q2nfd\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.309410 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-catalog-content\") pod \"community-operators-q2nfd\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.309972 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-catalog-content\") pod \"community-operators-q2nfd\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.310186 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-utilities\") pod \"community-operators-q2nfd\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.310432 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-utilities\") pod \"community-operators-q2nfd\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.334511 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsk9s\" (UniqueName: \"kubernetes.io/projected/372c9570-8d33-406b-af30-4a98b6a177ce-kube-api-access-qsk9s\") pod \"community-operators-q2nfd\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:19 crc kubenswrapper[4805]: I0216 21:45:19.429460 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:20 crc kubenswrapper[4805]: I0216 21:45:20.005958 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2nfd"] Feb 16 21:45:20 crc kubenswrapper[4805]: W0216 21:45:20.007848 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod372c9570_8d33_406b_af30_4a98b6a177ce.slice/crio-32c790c88bd5dbd150e1819620ac3c308e59bf7d3a6facaee240aff748c3eafb WatchSource:0}: Error finding container 32c790c88bd5dbd150e1819620ac3c308e59bf7d3a6facaee240aff748c3eafb: Status 404 returned error can't find the container with id 32c790c88bd5dbd150e1819620ac3c308e59bf7d3a6facaee240aff748c3eafb Feb 16 21:45:20 crc kubenswrapper[4805]: I0216 21:45:20.566787 4805 generic.go:334] "Generic (PLEG): container finished" podID="372c9570-8d33-406b-af30-4a98b6a177ce" containerID="f1e5342015f2e68ded598b4999b61b156b0c02f7ddcac87f1916b77151a3aea2" exitCode=0 Feb 16 21:45:20 crc kubenswrapper[4805]: I0216 21:45:20.566852 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2nfd" event={"ID":"372c9570-8d33-406b-af30-4a98b6a177ce","Type":"ContainerDied","Data":"f1e5342015f2e68ded598b4999b61b156b0c02f7ddcac87f1916b77151a3aea2"} Feb 16 21:45:20 crc kubenswrapper[4805]: I0216 21:45:20.566919 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2nfd" event={"ID":"372c9570-8d33-406b-af30-4a98b6a177ce","Type":"ContainerStarted","Data":"32c790c88bd5dbd150e1819620ac3c308e59bf7d3a6facaee240aff748c3eafb"} Feb 16 21:45:21 crc kubenswrapper[4805]: E0216 21:45:21.600972 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:45:22 crc kubenswrapper[4805]: I0216 21:45:22.596437 4805 generic.go:334] "Generic (PLEG): container finished" podID="372c9570-8d33-406b-af30-4a98b6a177ce" containerID="d3dc7bccafd0b24ef5249127d40fdd600258ae349150737512e46ee7aa26d2fc" exitCode=0 Feb 16 21:45:22 crc kubenswrapper[4805]: I0216 21:45:22.596566 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2nfd" event={"ID":"372c9570-8d33-406b-af30-4a98b6a177ce","Type":"ContainerDied","Data":"d3dc7bccafd0b24ef5249127d40fdd600258ae349150737512e46ee7aa26d2fc"} Feb 16 21:45:23 crc kubenswrapper[4805]: I0216 21:45:23.647178 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2nfd" event={"ID":"372c9570-8d33-406b-af30-4a98b6a177ce","Type":"ContainerStarted","Data":"ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d"} Feb 16 21:45:23 crc kubenswrapper[4805]: I0216 21:45:23.690810 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q2nfd" podStartSLOduration=2.263435387 podStartE2EDuration="4.690768284s" podCreationTimestamp="2026-02-16 21:45:19 +0000 UTC" firstStartedPulling="2026-02-16 21:45:20.570468189 +0000 UTC m=+2938.389151524" lastFinishedPulling="2026-02-16 21:45:22.997801116 +0000 UTC m=+2940.816484421" observedRunningTime="2026-02-16 21:45:23.654160844 +0000 UTC m=+2941.472844149" watchObservedRunningTime="2026-02-16 21:45:23.690768284 +0000 UTC m=+2941.509451579" Feb 16 21:45:25 crc kubenswrapper[4805]: E0216 21:45:25.601275 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:45:27 crc kubenswrapper[4805]: I0216 21:45:27.598100 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:45:27 crc kubenswrapper[4805]: E0216 21:45:27.599158 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:45:29 crc kubenswrapper[4805]: I0216 21:45:29.430379 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:29 crc kubenswrapper[4805]: I0216 21:45:29.430754 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:29 crc kubenswrapper[4805]: I0216 21:45:29.488784 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:29 crc kubenswrapper[4805]: I0216 21:45:29.739825 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:29 crc kubenswrapper[4805]: I0216 21:45:29.850478 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2nfd"] Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.042759 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl"] Feb 16 21:45:31 crc 
kubenswrapper[4805]: I0216 21:45:31.045396 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.053668 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.053902 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.054095 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46tr9" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.054337 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.058507 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl"] Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.149399 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dccph\" (UniqueName: \"kubernetes.io/projected/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-kube-api-access-dccph\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-98zrl\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.149544 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-98zrl\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.149597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-98zrl\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.252152 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-98zrl\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.252222 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-98zrl\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.252369 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dccph\" (UniqueName: \"kubernetes.io/projected/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-kube-api-access-dccph\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-98zrl\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.257865 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-98zrl\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.258153 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-98zrl\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.270863 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dccph\" (UniqueName: \"kubernetes.io/projected/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-kube-api-access-dccph\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-98zrl\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.394244 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.711778 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q2nfd" podUID="372c9570-8d33-406b-af30-4a98b6a177ce" containerName="registry-server" containerID="cri-o://ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d" gracePeriod=2 Feb 16 21:45:31 crc kubenswrapper[4805]: I0216 21:45:31.988799 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl"] Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.143811 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.174103 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-catalog-content\") pod \"372c9570-8d33-406b-af30-4a98b6a177ce\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.174207 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-utilities\") pod \"372c9570-8d33-406b-af30-4a98b6a177ce\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.174258 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsk9s\" (UniqueName: \"kubernetes.io/projected/372c9570-8d33-406b-af30-4a98b6a177ce-kube-api-access-qsk9s\") pod \"372c9570-8d33-406b-af30-4a98b6a177ce\" (UID: \"372c9570-8d33-406b-af30-4a98b6a177ce\") " Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.175407 4805 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-utilities" (OuterVolumeSpecName: "utilities") pod "372c9570-8d33-406b-af30-4a98b6a177ce" (UID: "372c9570-8d33-406b-af30-4a98b6a177ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.182326 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372c9570-8d33-406b-af30-4a98b6a177ce-kube-api-access-qsk9s" (OuterVolumeSpecName: "kube-api-access-qsk9s") pod "372c9570-8d33-406b-af30-4a98b6a177ce" (UID: "372c9570-8d33-406b-af30-4a98b6a177ce"). InnerVolumeSpecName "kube-api-access-qsk9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.239874 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "372c9570-8d33-406b-af30-4a98b6a177ce" (UID: "372c9570-8d33-406b-af30-4a98b6a177ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.277576 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.277623 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372c9570-8d33-406b-af30-4a98b6a177ce-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.277637 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsk9s\" (UniqueName: \"kubernetes.io/projected/372c9570-8d33-406b-af30-4a98b6a177ce-kube-api-access-qsk9s\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.728483 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2nfd" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.729053 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2nfd" event={"ID":"372c9570-8d33-406b-af30-4a98b6a177ce","Type":"ContainerDied","Data":"ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d"} Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.729203 4805 scope.go:117] "RemoveContainer" containerID="ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.728377 4805 generic.go:334] "Generic (PLEG): container finished" podID="372c9570-8d33-406b-af30-4a98b6a177ce" containerID="ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d" exitCode=0 Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.729699 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2nfd" 
event={"ID":"372c9570-8d33-406b-af30-4a98b6a177ce","Type":"ContainerDied","Data":"32c790c88bd5dbd150e1819620ac3c308e59bf7d3a6facaee240aff748c3eafb"} Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.732147 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" event={"ID":"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb","Type":"ContainerStarted","Data":"e0299df66ab392abc70fb5aae01b2ebd98dd9a7be40d05fdeb425cb351c0d083"} Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.754461 4805 scope.go:117] "RemoveContainer" containerID="d3dc7bccafd0b24ef5249127d40fdd600258ae349150737512e46ee7aa26d2fc" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.793263 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2nfd"] Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.805574 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q2nfd"] Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.811950 4805 scope.go:117] "RemoveContainer" containerID="f1e5342015f2e68ded598b4999b61b156b0c02f7ddcac87f1916b77151a3aea2" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.850086 4805 scope.go:117] "RemoveContainer" containerID="ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d" Feb 16 21:45:32 crc kubenswrapper[4805]: E0216 21:45:32.850449 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d\": container with ID starting with ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d not found: ID does not exist" containerID="ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.850480 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d"} err="failed to get container status \"ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d\": rpc error: code = NotFound desc = could not find container \"ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d\": container with ID starting with ba959a25028152fc599e7f9bde3f57a3f125b2e37d4b4b9ad48b9b6df222046d not found: ID does not exist" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.850502 4805 scope.go:117] "RemoveContainer" containerID="d3dc7bccafd0b24ef5249127d40fdd600258ae349150737512e46ee7aa26d2fc" Feb 16 21:45:32 crc kubenswrapper[4805]: E0216 21:45:32.850963 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3dc7bccafd0b24ef5249127d40fdd600258ae349150737512e46ee7aa26d2fc\": container with ID starting with d3dc7bccafd0b24ef5249127d40fdd600258ae349150737512e46ee7aa26d2fc not found: ID does not exist" containerID="d3dc7bccafd0b24ef5249127d40fdd600258ae349150737512e46ee7aa26d2fc" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.851007 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3dc7bccafd0b24ef5249127d40fdd600258ae349150737512e46ee7aa26d2fc"} err="failed to get container status \"d3dc7bccafd0b24ef5249127d40fdd600258ae349150737512e46ee7aa26d2fc\": rpc error: code = NotFound desc = could not find container \"d3dc7bccafd0b24ef5249127d40fdd600258ae349150737512e46ee7aa26d2fc\": container with ID starting with d3dc7bccafd0b24ef5249127d40fdd600258ae349150737512e46ee7aa26d2fc not found: ID does not exist" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.851021 4805 scope.go:117] "RemoveContainer" containerID="f1e5342015f2e68ded598b4999b61b156b0c02f7ddcac87f1916b77151a3aea2" Feb 16 21:45:32 crc kubenswrapper[4805]: E0216 21:45:32.851252 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f1e5342015f2e68ded598b4999b61b156b0c02f7ddcac87f1916b77151a3aea2\": container with ID starting with f1e5342015f2e68ded598b4999b61b156b0c02f7ddcac87f1916b77151a3aea2 not found: ID does not exist" containerID="f1e5342015f2e68ded598b4999b61b156b0c02f7ddcac87f1916b77151a3aea2" Feb 16 21:45:32 crc kubenswrapper[4805]: I0216 21:45:32.851276 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e5342015f2e68ded598b4999b61b156b0c02f7ddcac87f1916b77151a3aea2"} err="failed to get container status \"f1e5342015f2e68ded598b4999b61b156b0c02f7ddcac87f1916b77151a3aea2\": rpc error: code = NotFound desc = could not find container \"f1e5342015f2e68ded598b4999b61b156b0c02f7ddcac87f1916b77151a3aea2\": container with ID starting with f1e5342015f2e68ded598b4999b61b156b0c02f7ddcac87f1916b77151a3aea2 not found: ID does not exist" Feb 16 21:45:33 crc kubenswrapper[4805]: I0216 21:45:33.618664 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372c9570-8d33-406b-af30-4a98b6a177ce" path="/var/lib/kubelet/pods/372c9570-8d33-406b-af30-4a98b6a177ce/volumes" Feb 16 21:45:33 crc kubenswrapper[4805]: I0216 21:45:33.745441 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" event={"ID":"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb","Type":"ContainerStarted","Data":"ce0493643e43f344c20aa67c1a9a24a4989ee6ce330e0bf02784966786eeb3ee"} Feb 16 21:45:33 crc kubenswrapper[4805]: I0216 21:45:33.772187 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" podStartSLOduration=2.247462062 podStartE2EDuration="2.772157756s" podCreationTimestamp="2026-02-16 21:45:31 +0000 UTC" firstStartedPulling="2026-02-16 21:45:31.996572411 +0000 UTC m=+2949.815255706" lastFinishedPulling="2026-02-16 21:45:32.521268105 +0000 
UTC m=+2950.339951400" observedRunningTime="2026-02-16 21:45:33.761825457 +0000 UTC m=+2951.580508782" watchObservedRunningTime="2026-02-16 21:45:33.772157756 +0000 UTC m=+2951.590841071" Feb 16 21:45:36 crc kubenswrapper[4805]: E0216 21:45:36.600409 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:45:37 crc kubenswrapper[4805]: I0216 21:45:37.544141 4805 scope.go:117] "RemoveContainer" containerID="3f3f32f577e09b73acc30dcc1a6b8beb3b86c342fafcdf32bbfd6297b2af860d" Feb 16 21:45:38 crc kubenswrapper[4805]: I0216 21:45:38.598893 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:45:38 crc kubenswrapper[4805]: E0216 21:45:38.599977 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:45:38 crc kubenswrapper[4805]: E0216 21:45:38.600136 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:45:48 crc kubenswrapper[4805]: E0216 21:45:48.601463 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:45:49 crc kubenswrapper[4805]: E0216 21:45:49.598950 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:45:52 crc kubenswrapper[4805]: I0216 21:45:52.598075 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:45:52 crc kubenswrapper[4805]: E0216 21:45:52.598910 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:46:01 crc kubenswrapper[4805]: E0216 21:46:01.600278 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:46:04 crc kubenswrapper[4805]: E0216 21:46:04.600553 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:46:05 crc kubenswrapper[4805]: I0216 21:46:05.598094 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:46:05 crc kubenswrapper[4805]: E0216 21:46:05.598653 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:46:16 crc kubenswrapper[4805]: E0216 21:46:16.604155 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:46:17 crc kubenswrapper[4805]: E0216 21:46:17.611150 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:46:20 crc kubenswrapper[4805]: I0216 21:46:20.597319 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:46:20 crc kubenswrapper[4805]: E0216 21:46:20.597981 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:46:29 crc kubenswrapper[4805]: I0216 21:46:29.600312 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:46:29 crc kubenswrapper[4805]: E0216 21:46:29.697941 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:46:29 crc kubenswrapper[4805]: E0216 21:46:29.698012 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:46:29 crc kubenswrapper[4805]: E0216 21:46:29.698137 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:46:29 crc kubenswrapper[4805]: E0216 21:46:29.699639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:46:31 crc kubenswrapper[4805]: E0216 21:46:31.600738 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:46:35 crc kubenswrapper[4805]: I0216 21:46:35.598383 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:46:35 crc kubenswrapper[4805]: E0216 21:46:35.599273 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:46:42 crc kubenswrapper[4805]: E0216 21:46:42.601109 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:46:44 crc kubenswrapper[4805]: E0216 21:46:44.600452 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" 
podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:46:48 crc kubenswrapper[4805]: I0216 21:46:48.597835 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:46:48 crc kubenswrapper[4805]: E0216 21:46:48.598889 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:46:55 crc kubenswrapper[4805]: E0216 21:46:55.634194 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:46:56 crc kubenswrapper[4805]: E0216 21:46:56.728773 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:46:56 crc kubenswrapper[4805]: E0216 21:46:56.729136 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:46:56 crc kubenswrapper[4805]: E0216 21:46:56.729356 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca
-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 21:46:56 crc kubenswrapper[4805]: E0216 21:46:56.730642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:47:00 crc kubenswrapper[4805]: I0216 21:47:00.597859 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:47:00 crc kubenswrapper[4805]: E0216 21:47:00.598657 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:47:07 crc kubenswrapper[4805]: E0216 21:47:07.600232 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:47:09 crc kubenswrapper[4805]: E0216 21:47:09.601954 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:47:14 crc kubenswrapper[4805]: I0216 21:47:14.599420 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:47:14 crc kubenswrapper[4805]: E0216 21:47:14.600540 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:47:21 crc kubenswrapper[4805]: E0216 21:47:21.602348 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:47:21 crc kubenswrapper[4805]: E0216 21:47:21.603121 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:47:29 crc kubenswrapper[4805]: I0216 21:47:29.597908 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:47:29 crc kubenswrapper[4805]: E0216 21:47:29.598943 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:47:33 crc kubenswrapper[4805]: E0216 21:47:33.618517 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:47:36 crc kubenswrapper[4805]: E0216 21:47:36.600097 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:47:41 crc kubenswrapper[4805]: I0216 21:47:41.598226 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:47:41 crc kubenswrapper[4805]: E0216 21:47:41.600381 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:47:48 crc kubenswrapper[4805]: E0216 21:47:48.600531 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:47:51 crc kubenswrapper[4805]: E0216 21:47:51.600375 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:47:55 crc kubenswrapper[4805]: I0216 21:47:55.599155 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:47:55 crc kubenswrapper[4805]: E0216 21:47:55.600418 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:48:02 crc kubenswrapper[4805]: E0216 21:48:02.600486 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:48:03 crc kubenswrapper[4805]: E0216 21:48:03.611984 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:48:06 crc kubenswrapper[4805]: I0216 21:48:06.598274 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:48:06 crc kubenswrapper[4805]: E0216 21:48:06.598917 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:48:17 crc kubenswrapper[4805]: E0216 21:48:17.600216 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:48:17 crc kubenswrapper[4805]: E0216 21:48:17.600281 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:48:18 crc kubenswrapper[4805]: I0216 21:48:18.598014 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:48:18 crc kubenswrapper[4805]: E0216 21:48:18.599049 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:48:29 crc kubenswrapper[4805]: E0216 21:48:29.602033 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:48:30 crc kubenswrapper[4805]: I0216 21:48:30.598230 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:48:30 crc kubenswrapper[4805]: E0216 21:48:30.598980 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:48:30 crc kubenswrapper[4805]: E0216 21:48:30.599844 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:48:41 crc kubenswrapper[4805]: I0216 21:48:41.598286 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 
21:48:41 crc kubenswrapper[4805]: E0216 21:48:41.599172 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:48:42 crc kubenswrapper[4805]: E0216 21:48:42.600389 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:48:43 crc kubenswrapper[4805]: E0216 21:48:43.608534 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:48:54 crc kubenswrapper[4805]: I0216 21:48:54.598794 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:48:54 crc kubenswrapper[4805]: E0216 21:48:54.599741 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:48:55 crc 
kubenswrapper[4805]: E0216 21:48:55.600152 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:48:58 crc kubenswrapper[4805]: E0216 21:48:58.599675 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:49:06 crc kubenswrapper[4805]: I0216 21:49:06.598665 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:49:06 crc kubenswrapper[4805]: E0216 21:49:06.599544 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:49:07 crc kubenswrapper[4805]: E0216 21:49:07.602642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:49:12 crc kubenswrapper[4805]: E0216 21:49:12.602679 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:49:20 crc kubenswrapper[4805]: I0216 21:49:20.599096 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:49:21 crc kubenswrapper[4805]: I0216 21:49:21.422042 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"4bde70584d86d2ced57fa80a14a9cd8751c173f2f956e7ceb2370c0e43f83d0f"} Feb 16 21:49:21 crc kubenswrapper[4805]: E0216 21:49:21.600959 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:49:25 crc kubenswrapper[4805]: E0216 21:49:25.604493 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:49:34 crc kubenswrapper[4805]: E0216 21:49:34.601536 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 
21:49:39 crc kubenswrapper[4805]: E0216 21:49:39.600800 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:49:46 crc kubenswrapper[4805]: E0216 21:49:46.601026 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:49:53 crc kubenswrapper[4805]: E0216 21:49:53.612439 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:49:58 crc kubenswrapper[4805]: E0216 21:49:58.601990 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:50:06 crc kubenswrapper[4805]: E0216 21:50:06.603279 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:50:12 crc kubenswrapper[4805]: E0216 21:50:12.600660 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:50:20 crc kubenswrapper[4805]: E0216 21:50:20.602122 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:50:26 crc kubenswrapper[4805]: E0216 21:50:26.600857 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:50:35 crc kubenswrapper[4805]: E0216 21:50:35.601408 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:50:38 crc kubenswrapper[4805]: E0216 21:50:38.601555 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:50:48 crc kubenswrapper[4805]: E0216 21:50:48.601004 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:50:51 crc kubenswrapper[4805]: E0216 21:50:51.601146 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:51:02 crc kubenswrapper[4805]: E0216 21:51:02.601198 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:51:05 crc kubenswrapper[4805]: E0216 21:51:05.600813 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:51:13 crc kubenswrapper[4805]: E0216 21:51:13.607369 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.384945 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mpv2l"] Feb 16 21:51:18 crc kubenswrapper[4805]: E0216 21:51:18.386327 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372c9570-8d33-406b-af30-4a98b6a177ce" containerName="extract-content" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.386351 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="372c9570-8d33-406b-af30-4a98b6a177ce" containerName="extract-content" Feb 16 21:51:18 crc kubenswrapper[4805]: E0216 21:51:18.386411 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372c9570-8d33-406b-af30-4a98b6a177ce" containerName="extract-utilities" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.386424 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="372c9570-8d33-406b-af30-4a98b6a177ce" containerName="extract-utilities" Feb 16 21:51:18 crc kubenswrapper[4805]: E0216 21:51:18.386443 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372c9570-8d33-406b-af30-4a98b6a177ce" containerName="registry-server" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.386455 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="372c9570-8d33-406b-af30-4a98b6a177ce" containerName="registry-server" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.386880 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="372c9570-8d33-406b-af30-4a98b6a177ce" containerName="registry-server" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.389554 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.404191 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpv2l"] Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.536414 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-utilities\") pod \"redhat-operators-mpv2l\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.536468 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-catalog-content\") pod \"redhat-operators-mpv2l\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.536543 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp9h2\" (UniqueName: \"kubernetes.io/projected/0897f8d4-7122-4a90-9b0d-5bff16c70111-kube-api-access-cp9h2\") pod \"redhat-operators-mpv2l\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:18 crc kubenswrapper[4805]: E0216 21:51:18.599759 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.638260 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-utilities\") pod \"redhat-operators-mpv2l\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.638326 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-catalog-content\") pod \"redhat-operators-mpv2l\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.638429 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp9h2\" (UniqueName: \"kubernetes.io/projected/0897f8d4-7122-4a90-9b0d-5bff16c70111-kube-api-access-cp9h2\") pod \"redhat-operators-mpv2l\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.638877 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-catalog-content\") pod \"redhat-operators-mpv2l\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.639153 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-utilities\") pod \"redhat-operators-mpv2l\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.662813 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp9h2\" (UniqueName: 
\"kubernetes.io/projected/0897f8d4-7122-4a90-9b0d-5bff16c70111-kube-api-access-cp9h2\") pod \"redhat-operators-mpv2l\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:18 crc kubenswrapper[4805]: I0216 21:51:18.720549 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:19 crc kubenswrapper[4805]: I0216 21:51:19.287696 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpv2l"] Feb 16 21:51:19 crc kubenswrapper[4805]: I0216 21:51:19.723181 4805 generic.go:334] "Generic (PLEG): container finished" podID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerID="0530e0f1b38dd207a19de110684cf5c7d179dea6b6358837bb13e1a70e1910d0" exitCode=0 Feb 16 21:51:19 crc kubenswrapper[4805]: I0216 21:51:19.723235 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpv2l" event={"ID":"0897f8d4-7122-4a90-9b0d-5bff16c70111","Type":"ContainerDied","Data":"0530e0f1b38dd207a19de110684cf5c7d179dea6b6358837bb13e1a70e1910d0"} Feb 16 21:51:19 crc kubenswrapper[4805]: I0216 21:51:19.723303 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpv2l" event={"ID":"0897f8d4-7122-4a90-9b0d-5bff16c70111","Type":"ContainerStarted","Data":"81b3fedcee02cf02cfe0ad2382750c49d24ed60d74703b8ae0991dec460303e3"} Feb 16 21:51:20 crc kubenswrapper[4805]: I0216 21:51:20.736030 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpv2l" event={"ID":"0897f8d4-7122-4a90-9b0d-5bff16c70111","Type":"ContainerStarted","Data":"26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab"} Feb 16 21:51:21 crc kubenswrapper[4805]: I0216 21:51:21.588574 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dxx7x"] Feb 16 21:51:21 crc 
kubenswrapper[4805]: I0216 21:51:21.593623 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:21 crc kubenswrapper[4805]: I0216 21:51:21.634958 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dxx7x"] Feb 16 21:51:21 crc kubenswrapper[4805]: I0216 21:51:21.716154 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-utilities\") pod \"certified-operators-dxx7x\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:21 crc kubenswrapper[4805]: I0216 21:51:21.716267 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82b62\" (UniqueName: \"kubernetes.io/projected/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-kube-api-access-82b62\") pod \"certified-operators-dxx7x\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:21 crc kubenswrapper[4805]: I0216 21:51:21.716418 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-catalog-content\") pod \"certified-operators-dxx7x\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:21 crc kubenswrapper[4805]: I0216 21:51:21.818515 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-utilities\") pod \"certified-operators-dxx7x\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:21 crc 
kubenswrapper[4805]: I0216 21:51:21.818598 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82b62\" (UniqueName: \"kubernetes.io/projected/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-kube-api-access-82b62\") pod \"certified-operators-dxx7x\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:21 crc kubenswrapper[4805]: I0216 21:51:21.818758 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-catalog-content\") pod \"certified-operators-dxx7x\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:21 crc kubenswrapper[4805]: I0216 21:51:21.819211 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-utilities\") pod \"certified-operators-dxx7x\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:21 crc kubenswrapper[4805]: I0216 21:51:21.819261 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-catalog-content\") pod \"certified-operators-dxx7x\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:21 crc kubenswrapper[4805]: I0216 21:51:21.842660 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82b62\" (UniqueName: \"kubernetes.io/projected/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-kube-api-access-82b62\") pod \"certified-operators-dxx7x\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:21 crc kubenswrapper[4805]: I0216 
21:51:21.923522 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:22 crc kubenswrapper[4805]: W0216 21:51:22.482519 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bf5a2f9_bc8f_4fa4_8767_3467702ee93d.slice/crio-008b9df79e6f37dc33e9db55b310edc2f7eeb5e346db200724eddc792e68bf4c WatchSource:0}: Error finding container 008b9df79e6f37dc33e9db55b310edc2f7eeb5e346db200724eddc792e68bf4c: Status 404 returned error can't find the container with id 008b9df79e6f37dc33e9db55b310edc2f7eeb5e346db200724eddc792e68bf4c Feb 16 21:51:22 crc kubenswrapper[4805]: I0216 21:51:22.493219 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dxx7x"] Feb 16 21:51:22 crc kubenswrapper[4805]: I0216 21:51:22.758830 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxx7x" event={"ID":"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d","Type":"ContainerStarted","Data":"8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c"} Feb 16 21:51:22 crc kubenswrapper[4805]: I0216 21:51:22.758873 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxx7x" event={"ID":"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d","Type":"ContainerStarted","Data":"008b9df79e6f37dc33e9db55b310edc2f7eeb5e346db200724eddc792e68bf4c"} Feb 16 21:51:23 crc kubenswrapper[4805]: I0216 21:51:23.769027 4805 generic.go:334] "Generic (PLEG): container finished" podID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" containerID="8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c" exitCode=0 Feb 16 21:51:23 crc kubenswrapper[4805]: I0216 21:51:23.769155 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxx7x" 
event={"ID":"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d","Type":"ContainerDied","Data":"8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c"} Feb 16 21:51:24 crc kubenswrapper[4805]: I0216 21:51:24.778808 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxx7x" event={"ID":"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d","Type":"ContainerStarted","Data":"10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858"} Feb 16 21:51:25 crc kubenswrapper[4805]: E0216 21:51:25.600031 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:51:25 crc kubenswrapper[4805]: I0216 21:51:25.794629 4805 generic.go:334] "Generic (PLEG): container finished" podID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerID="26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab" exitCode=0 Feb 16 21:51:25 crc kubenswrapper[4805]: I0216 21:51:25.794771 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpv2l" event={"ID":"0897f8d4-7122-4a90-9b0d-5bff16c70111","Type":"ContainerDied","Data":"26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab"} Feb 16 21:51:26 crc kubenswrapper[4805]: I0216 21:51:26.808679 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpv2l" event={"ID":"0897f8d4-7122-4a90-9b0d-5bff16c70111","Type":"ContainerStarted","Data":"3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506"} Feb 16 21:51:26 crc kubenswrapper[4805]: I0216 21:51:26.813575 4805 generic.go:334] "Generic (PLEG): container finished" podID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" 
containerID="10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858" exitCode=0 Feb 16 21:51:26 crc kubenswrapper[4805]: I0216 21:51:26.813617 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxx7x" event={"ID":"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d","Type":"ContainerDied","Data":"10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858"} Feb 16 21:51:26 crc kubenswrapper[4805]: I0216 21:51:26.839883 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mpv2l" podStartSLOduration=2.340153815 podStartE2EDuration="8.83986413s" podCreationTimestamp="2026-02-16 21:51:18 +0000 UTC" firstStartedPulling="2026-02-16 21:51:19.726428159 +0000 UTC m=+3297.545111454" lastFinishedPulling="2026-02-16 21:51:26.226138474 +0000 UTC m=+3304.044821769" observedRunningTime="2026-02-16 21:51:26.831039521 +0000 UTC m=+3304.649722826" watchObservedRunningTime="2026-02-16 21:51:26.83986413 +0000 UTC m=+3304.658547425" Feb 16 21:51:27 crc kubenswrapper[4805]: I0216 21:51:27.825559 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxx7x" event={"ID":"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d","Type":"ContainerStarted","Data":"3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7"} Feb 16 21:51:27 crc kubenswrapper[4805]: I0216 21:51:27.856665 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dxx7x" podStartSLOduration=3.410412546 podStartE2EDuration="6.856644964s" podCreationTimestamp="2026-02-16 21:51:21 +0000 UTC" firstStartedPulling="2026-02-16 21:51:23.772865476 +0000 UTC m=+3301.591548771" lastFinishedPulling="2026-02-16 21:51:27.219097894 +0000 UTC m=+3305.037781189" observedRunningTime="2026-02-16 21:51:27.847763603 +0000 UTC m=+3305.666446908" watchObservedRunningTime="2026-02-16 21:51:27.856644964 +0000 UTC m=+3305.675328249" 
Feb 16 21:51:28 crc kubenswrapper[4805]: I0216 21:51:28.721027 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:28 crc kubenswrapper[4805]: I0216 21:51:28.721325 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:29 crc kubenswrapper[4805]: I0216 21:51:29.803334 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mpv2l" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerName="registry-server" probeResult="failure" output=< Feb 16 21:51:29 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:51:29 crc kubenswrapper[4805]: > Feb 16 21:51:31 crc kubenswrapper[4805]: I0216 21:51:31.923890 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:31 crc kubenswrapper[4805]: I0216 21:51:31.924255 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:31 crc kubenswrapper[4805]: I0216 21:51:31.983217 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:32 crc kubenswrapper[4805]: I0216 21:51:32.961357 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:33 crc kubenswrapper[4805]: I0216 21:51:33.030676 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dxx7x"] Feb 16 21:51:33 crc kubenswrapper[4805]: I0216 21:51:33.609188 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:51:33 crc kubenswrapper[4805]: E0216 21:51:33.742756 4805 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:51:33 crc kubenswrapper[4805]: E0216 21:51:33.742809 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 21:51:33 crc kubenswrapper[4805]: E0216 21:51:33.742937 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:51:33 crc kubenswrapper[4805]: E0216 21:51:33.744133 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:51:34 crc kubenswrapper[4805]: I0216 21:51:34.920601 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dxx7x" podUID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" containerName="registry-server" containerID="cri-o://3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7" gracePeriod=2 Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.617062 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.668096 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82b62\" (UniqueName: \"kubernetes.io/projected/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-kube-api-access-82b62\") pod \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.668253 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-utilities\") pod \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.668326 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-catalog-content\") pod \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\" (UID: \"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d\") " Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.677574 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-utilities" (OuterVolumeSpecName: "utilities") pod "4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" (UID: "4bf5a2f9-bc8f-4fa4-8767-3467702ee93d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.681659 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-kube-api-access-82b62" (OuterVolumeSpecName: "kube-api-access-82b62") pod "4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" (UID: "4bf5a2f9-bc8f-4fa4-8767-3467702ee93d"). InnerVolumeSpecName "kube-api-access-82b62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.740316 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" (UID: "4bf5a2f9-bc8f-4fa4-8767-3467702ee93d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.772428 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.772460 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82b62\" (UniqueName: \"kubernetes.io/projected/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-kube-api-access-82b62\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.772482 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.932336 4805 generic.go:334] "Generic (PLEG): container finished" podID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" containerID="3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7" exitCode=0 Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.932383 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxx7x" event={"ID":"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d","Type":"ContainerDied","Data":"3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7"} Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.932408 4805 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-dxx7x" event={"ID":"4bf5a2f9-bc8f-4fa4-8767-3467702ee93d","Type":"ContainerDied","Data":"008b9df79e6f37dc33e9db55b310edc2f7eeb5e346db200724eddc792e68bf4c"} Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.932426 4805 scope.go:117] "RemoveContainer" containerID="3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7" Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.932564 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxx7x" Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.968476 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dxx7x"] Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.970314 4805 scope.go:117] "RemoveContainer" containerID="10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858" Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.981030 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dxx7x"] Feb 16 21:51:35 crc kubenswrapper[4805]: I0216 21:51:35.998147 4805 scope.go:117] "RemoveContainer" containerID="8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c" Feb 16 21:51:36 crc kubenswrapper[4805]: I0216 21:51:36.056297 4805 scope.go:117] "RemoveContainer" containerID="3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7" Feb 16 21:51:36 crc kubenswrapper[4805]: E0216 21:51:36.056998 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7\": container with ID starting with 3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7 not found: ID does not exist" containerID="3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7" Feb 16 21:51:36 crc kubenswrapper[4805]: I0216 
21:51:36.057067 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7"} err="failed to get container status \"3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7\": rpc error: code = NotFound desc = could not find container \"3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7\": container with ID starting with 3f08d3e26865599f74bd24703ccf65c01287e23471f1318b055d86a7dc2f2ca7 not found: ID does not exist" Feb 16 21:51:36 crc kubenswrapper[4805]: I0216 21:51:36.057100 4805 scope.go:117] "RemoveContainer" containerID="10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858" Feb 16 21:51:36 crc kubenswrapper[4805]: E0216 21:51:36.057573 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858\": container with ID starting with 10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858 not found: ID does not exist" containerID="10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858" Feb 16 21:51:36 crc kubenswrapper[4805]: I0216 21:51:36.057609 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858"} err="failed to get container status \"10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858\": rpc error: code = NotFound desc = could not find container \"10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858\": container with ID starting with 10c5fa837157d6f7e110c021edd7b04955b03d0b95bad6049bb49a98d28b5858 not found: ID does not exist" Feb 16 21:51:36 crc kubenswrapper[4805]: I0216 21:51:36.057626 4805 scope.go:117] "RemoveContainer" containerID="8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c" Feb 16 21:51:36 crc 
kubenswrapper[4805]: E0216 21:51:36.059079 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c\": container with ID starting with 8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c not found: ID does not exist" containerID="8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c" Feb 16 21:51:36 crc kubenswrapper[4805]: I0216 21:51:36.059118 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c"} err="failed to get container status \"8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c\": rpc error: code = NotFound desc = could not find container \"8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c\": container with ID starting with 8f849dfc07a3831a8cc5150713b7563073ae0babd686907ea3a5e5619fc2e81c not found: ID does not exist" Feb 16 21:51:37 crc kubenswrapper[4805]: E0216 21:51:37.599177 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:51:37 crc kubenswrapper[4805]: I0216 21:51:37.610926 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" path="/var/lib/kubelet/pods/4bf5a2f9-bc8f-4fa4-8767-3467702ee93d/volumes" Feb 16 21:51:38 crc kubenswrapper[4805]: I0216 21:51:38.099322 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:51:38 crc kubenswrapper[4805]: I0216 21:51:38.099699 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:51:39 crc kubenswrapper[4805]: I0216 21:51:39.775635 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mpv2l" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerName="registry-server" probeResult="failure" output=< Feb 16 21:51:39 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:51:39 crc kubenswrapper[4805]: > Feb 16 21:51:46 crc kubenswrapper[4805]: I0216 21:51:46.074903 4805 generic.go:334] "Generic (PLEG): container finished" podID="d937b07f-01b5-4ac1-8cc7-c1db2e1876bb" containerID="ce0493643e43f344c20aa67c1a9a24a4989ee6ce330e0bf02784966786eeb3ee" exitCode=2 Feb 16 21:51:46 crc kubenswrapper[4805]: I0216 21:51:46.075041 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" event={"ID":"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb","Type":"ContainerDied","Data":"ce0493643e43f344c20aa67c1a9a24a4989ee6ce330e0bf02784966786eeb3ee"} Feb 16 21:51:46 crc kubenswrapper[4805]: E0216 21:51:46.600766 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:51:47 crc kubenswrapper[4805]: I0216 21:51:47.647881 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:51:47 crc kubenswrapper[4805]: I0216 21:51:47.775053 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-ssh-key-openstack-edpm-ipam\") pod \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " Feb 16 21:51:47 crc kubenswrapper[4805]: I0216 21:51:47.775527 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dccph\" (UniqueName: \"kubernetes.io/projected/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-kube-api-access-dccph\") pod \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " Feb 16 21:51:47 crc kubenswrapper[4805]: I0216 21:51:47.775696 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-inventory\") pod \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\" (UID: \"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb\") " Feb 16 21:51:47 crc kubenswrapper[4805]: I0216 21:51:47.789338 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-kube-api-access-dccph" (OuterVolumeSpecName: "kube-api-access-dccph") pod "d937b07f-01b5-4ac1-8cc7-c1db2e1876bb" (UID: "d937b07f-01b5-4ac1-8cc7-c1db2e1876bb"). InnerVolumeSpecName "kube-api-access-dccph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:51:47 crc kubenswrapper[4805]: I0216 21:51:47.811272 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-inventory" (OuterVolumeSpecName: "inventory") pod "d937b07f-01b5-4ac1-8cc7-c1db2e1876bb" (UID: "d937b07f-01b5-4ac1-8cc7-c1db2e1876bb"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:51:47 crc kubenswrapper[4805]: I0216 21:51:47.819526 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d937b07f-01b5-4ac1-8cc7-c1db2e1876bb" (UID: "d937b07f-01b5-4ac1-8cc7-c1db2e1876bb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:51:47 crc kubenswrapper[4805]: I0216 21:51:47.878374 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:47 crc kubenswrapper[4805]: I0216 21:51:47.878412 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:47 crc kubenswrapper[4805]: I0216 21:51:47.878427 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dccph\" (UniqueName: \"kubernetes.io/projected/d937b07f-01b5-4ac1-8cc7-c1db2e1876bb-kube-api-access-dccph\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:48 crc kubenswrapper[4805]: I0216 21:51:48.100500 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" event={"ID":"d937b07f-01b5-4ac1-8cc7-c1db2e1876bb","Type":"ContainerDied","Data":"e0299df66ab392abc70fb5aae01b2ebd98dd9a7be40d05fdeb425cb351c0d083"} Feb 16 21:51:48 crc kubenswrapper[4805]: I0216 21:51:48.100550 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0299df66ab392abc70fb5aae01b2ebd98dd9a7be40d05fdeb425cb351c0d083" Feb 16 21:51:48 crc kubenswrapper[4805]: I0216 
21:51:48.100587 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-98zrl" Feb 16 21:51:48 crc kubenswrapper[4805]: E0216 21:51:48.600291 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:51:49 crc kubenswrapper[4805]: I0216 21:51:49.773221 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mpv2l" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerName="registry-server" probeResult="failure" output=< Feb 16 21:51:49 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 21:51:49 crc kubenswrapper[4805]: > Feb 16 21:51:58 crc kubenswrapper[4805]: I0216 21:51:58.777109 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:58 crc kubenswrapper[4805]: I0216 21:51:58.834154 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:51:59 crc kubenswrapper[4805]: I0216 21:51:59.030751 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mpv2l"] Feb 16 21:52:00 crc kubenswrapper[4805]: I0216 21:52:00.237174 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mpv2l" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerName="registry-server" containerID="cri-o://3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506" gracePeriod=2 Feb 16 21:52:00 crc kubenswrapper[4805]: E0216 21:52:00.702274 4805 log.go:32] "PullImage 
from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:52:00 crc kubenswrapper[4805]: E0216 21:52:00.703029 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:52:00 crc kubenswrapper[4805]: E0216 21:52:00.703193 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:52:00 crc kubenswrapper[4805]: E0216 21:52:00.704814 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:52:00 crc kubenswrapper[4805]: I0216 21:52:00.883290 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.019120 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-utilities\") pod \"0897f8d4-7122-4a90-9b0d-5bff16c70111\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.019338 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-catalog-content\") pod \"0897f8d4-7122-4a90-9b0d-5bff16c70111\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.019388 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp9h2\" (UniqueName: \"kubernetes.io/projected/0897f8d4-7122-4a90-9b0d-5bff16c70111-kube-api-access-cp9h2\") pod \"0897f8d4-7122-4a90-9b0d-5bff16c70111\" (UID: \"0897f8d4-7122-4a90-9b0d-5bff16c70111\") " Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.019974 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-utilities" (OuterVolumeSpecName: "utilities") pod "0897f8d4-7122-4a90-9b0d-5bff16c70111" (UID: "0897f8d4-7122-4a90-9b0d-5bff16c70111"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.024939 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0897f8d4-7122-4a90-9b0d-5bff16c70111-kube-api-access-cp9h2" (OuterVolumeSpecName: "kube-api-access-cp9h2") pod "0897f8d4-7122-4a90-9b0d-5bff16c70111" (UID: "0897f8d4-7122-4a90-9b0d-5bff16c70111"). InnerVolumeSpecName "kube-api-access-cp9h2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.123013 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.123053 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp9h2\" (UniqueName: \"kubernetes.io/projected/0897f8d4-7122-4a90-9b0d-5bff16c70111-kube-api-access-cp9h2\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.150925 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0897f8d4-7122-4a90-9b0d-5bff16c70111" (UID: "0897f8d4-7122-4a90-9b0d-5bff16c70111"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.225600 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0897f8d4-7122-4a90-9b0d-5bff16c70111-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.253677 4805 generic.go:334] "Generic (PLEG): container finished" podID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerID="3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506" exitCode=0 Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.253745 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mpv2l" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.253743 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpv2l" event={"ID":"0897f8d4-7122-4a90-9b0d-5bff16c70111","Type":"ContainerDied","Data":"3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506"} Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.253947 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpv2l" event={"ID":"0897f8d4-7122-4a90-9b0d-5bff16c70111","Type":"ContainerDied","Data":"81b3fedcee02cf02cfe0ad2382750c49d24ed60d74703b8ae0991dec460303e3"} Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.253988 4805 scope.go:117] "RemoveContainer" containerID="3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.281920 4805 scope.go:117] "RemoveContainer" containerID="26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.297598 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mpv2l"] Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.312838 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mpv2l"] Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.325368 4805 scope.go:117] "RemoveContainer" containerID="0530e0f1b38dd207a19de110684cf5c7d179dea6b6358837bb13e1a70e1910d0" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.359861 4805 scope.go:117] "RemoveContainer" containerID="3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506" Feb 16 21:52:01 crc kubenswrapper[4805]: E0216 21:52:01.362660 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506\": container with ID starting with 3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506 not found: ID does not exist" containerID="3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.362699 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506"} err="failed to get container status \"3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506\": rpc error: code = NotFound desc = could not find container \"3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506\": container with ID starting with 3d41dc47d2ae5260d2ad7c5a475fac6e65ee9ae1c5070213e47d401f7855f506 not found: ID does not exist" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.362740 4805 scope.go:117] "RemoveContainer" containerID="26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab" Feb 16 21:52:01 crc kubenswrapper[4805]: E0216 21:52:01.363189 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab\": container with ID starting with 26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab not found: ID does not exist" containerID="26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.363215 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab"} err="failed to get container status \"26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab\": rpc error: code = NotFound desc = could not find container \"26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab\": container with ID 
starting with 26ec06dbbf3fc2bd5cd6666eb16263937a488054b166428f1451227763c6b4ab not found: ID does not exist" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.363230 4805 scope.go:117] "RemoveContainer" containerID="0530e0f1b38dd207a19de110684cf5c7d179dea6b6358837bb13e1a70e1910d0" Feb 16 21:52:01 crc kubenswrapper[4805]: E0216 21:52:01.363687 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0530e0f1b38dd207a19de110684cf5c7d179dea6b6358837bb13e1a70e1910d0\": container with ID starting with 0530e0f1b38dd207a19de110684cf5c7d179dea6b6358837bb13e1a70e1910d0 not found: ID does not exist" containerID="0530e0f1b38dd207a19de110684cf5c7d179dea6b6358837bb13e1a70e1910d0" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.363735 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0530e0f1b38dd207a19de110684cf5c7d179dea6b6358837bb13e1a70e1910d0"} err="failed to get container status \"0530e0f1b38dd207a19de110684cf5c7d179dea6b6358837bb13e1a70e1910d0\": rpc error: code = NotFound desc = could not find container \"0530e0f1b38dd207a19de110684cf5c7d179dea6b6358837bb13e1a70e1910d0\": container with ID starting with 0530e0f1b38dd207a19de110684cf5c7d179dea6b6358837bb13e1a70e1910d0 not found: ID does not exist" Feb 16 21:52:01 crc kubenswrapper[4805]: E0216 21:52:01.599437 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:52:01 crc kubenswrapper[4805]: I0216 21:52:01.612409 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" path="/var/lib/kubelet/pods/0897f8d4-7122-4a90-9b0d-5bff16c70111/volumes" Feb 16 
21:52:08 crc kubenswrapper[4805]: I0216 21:52:08.100021 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:52:08 crc kubenswrapper[4805]: I0216 21:52:08.100685 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:52:14 crc kubenswrapper[4805]: E0216 21:52:14.600622 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:52:14 crc kubenswrapper[4805]: E0216 21:52:14.601349 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:52:25 crc kubenswrapper[4805]: E0216 21:52:25.600254 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:52:27 crc kubenswrapper[4805]: E0216 
21:52:27.600842 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:52:38 crc kubenswrapper[4805]: I0216 21:52:38.100097 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:52:38 crc kubenswrapper[4805]: I0216 21:52:38.101051 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:52:38 crc kubenswrapper[4805]: I0216 21:52:38.101182 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 21:52:38 crc kubenswrapper[4805]: I0216 21:52:38.103058 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4bde70584d86d2ced57fa80a14a9cd8751c173f2f956e7ceb2370c0e43f83d0f"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:52:38 crc kubenswrapper[4805]: I0216 21:52:38.103212 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" 
podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://4bde70584d86d2ced57fa80a14a9cd8751c173f2f956e7ceb2370c0e43f83d0f" gracePeriod=600 Feb 16 21:52:38 crc kubenswrapper[4805]: E0216 21:52:38.599839 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:52:38 crc kubenswrapper[4805]: I0216 21:52:38.692444 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="4bde70584d86d2ced57fa80a14a9cd8751c173f2f956e7ceb2370c0e43f83d0f" exitCode=0 Feb 16 21:52:38 crc kubenswrapper[4805]: I0216 21:52:38.692504 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"4bde70584d86d2ced57fa80a14a9cd8751c173f2f956e7ceb2370c0e43f83d0f"} Feb 16 21:52:38 crc kubenswrapper[4805]: I0216 21:52:38.692535 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0"} Feb 16 21:52:38 crc kubenswrapper[4805]: I0216 21:52:38.692557 4805 scope.go:117] "RemoveContainer" containerID="54d6b96e174763aa2edcbb21fd457ffeef84fa81e8f6d83d1437c00a10198459" Feb 16 21:52:39 crc kubenswrapper[4805]: E0216 21:52:39.599389 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:52:49 crc kubenswrapper[4805]: E0216 21:52:49.600638 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:52:50 crc kubenswrapper[4805]: E0216 21:52:50.607163 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:53:01 crc kubenswrapper[4805]: E0216 21:53:01.601128 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:53:01 crc kubenswrapper[4805]: E0216 21:53:01.601213 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.085830 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m"] Feb 16 21:53:05 crc 
kubenswrapper[4805]: E0216 21:53:05.087012 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d937b07f-01b5-4ac1-8cc7-c1db2e1876bb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.087040 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d937b07f-01b5-4ac1-8cc7-c1db2e1876bb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:53:05 crc kubenswrapper[4805]: E0216 21:53:05.087076 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" containerName="extract-utilities" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.087090 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" containerName="extract-utilities" Feb 16 21:53:05 crc kubenswrapper[4805]: E0216 21:53:05.087126 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerName="extract-utilities" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.087140 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerName="extract-utilities" Feb 16 21:53:05 crc kubenswrapper[4805]: E0216 21:53:05.087189 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" containerName="extract-content" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.087202 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" containerName="extract-content" Feb 16 21:53:05 crc kubenswrapper[4805]: E0216 21:53:05.087226 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" containerName="registry-server" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.087239 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" containerName="registry-server" Feb 16 21:53:05 crc kubenswrapper[4805]: E0216 21:53:05.087264 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerName="registry-server" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.087277 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerName="registry-server" Feb 16 21:53:05 crc kubenswrapper[4805]: E0216 21:53:05.087326 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerName="extract-content" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.087339 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerName="extract-content" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.087841 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf5a2f9-bc8f-4fa4-8767-3467702ee93d" containerName="registry-server" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.087878 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d937b07f-01b5-4ac1-8cc7-c1db2e1876bb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.087944 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0897f8d4-7122-4a90-9b0d-5bff16c70111" containerName="registry-server" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.089347 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.091674 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.092019 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.092284 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.092454 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46tr9" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.101256 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m"] Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.160443 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4v99m\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.160707 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdsq8\" (UniqueName: \"kubernetes.io/projected/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-kube-api-access-zdsq8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4v99m\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:53:05 crc 
kubenswrapper[4805]: I0216 21:53:05.160929 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4v99m\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.263056 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4v99m\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.263400 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4v99m\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.263513 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdsq8\" (UniqueName: \"kubernetes.io/projected/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-kube-api-access-zdsq8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4v99m\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.270641 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4v99m\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.274569 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4v99m\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.286880 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdsq8\" (UniqueName: \"kubernetes.io/projected/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-kube-api-access-zdsq8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4v99m\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.427177 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:53:05 crc kubenswrapper[4805]: I0216 21:53:05.969245 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m"] Feb 16 21:53:06 crc kubenswrapper[4805]: I0216 21:53:06.006210 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" event={"ID":"54cd5193-d167-4eaa-86bf-3e5ca7a7703a","Type":"ContainerStarted","Data":"fd517d4766095eb9dabafe186fb207fa401eef7f8f63685e451bb855b3cdd50a"} Feb 16 21:53:07 crc kubenswrapper[4805]: I0216 21:53:07.022653 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" event={"ID":"54cd5193-d167-4eaa-86bf-3e5ca7a7703a","Type":"ContainerStarted","Data":"d65ded99b816897766e2c6f9a20b67de600d2bd4279b520723ab5ce3629936c1"} Feb 16 21:53:07 crc kubenswrapper[4805]: I0216 21:53:07.049012 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" podStartSLOduration=1.583480507 podStartE2EDuration="2.048995585s" podCreationTimestamp="2026-02-16 21:53:05 +0000 UTC" firstStartedPulling="2026-02-16 21:53:05.972860866 +0000 UTC m=+3403.791544161" lastFinishedPulling="2026-02-16 21:53:06.438375934 +0000 UTC m=+3404.257059239" observedRunningTime="2026-02-16 21:53:07.039075587 +0000 UTC m=+3404.857758882" watchObservedRunningTime="2026-02-16 21:53:07.048995585 +0000 UTC m=+3404.867678880" Feb 16 21:53:12 crc kubenswrapper[4805]: E0216 21:53:12.601402 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" 
podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:53:15 crc kubenswrapper[4805]: E0216 21:53:15.601591 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:53:27 crc kubenswrapper[4805]: E0216 21:53:27.599988 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:53:29 crc kubenswrapper[4805]: E0216 21:53:29.601615 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:53:41 crc kubenswrapper[4805]: E0216 21:53:41.601972 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:53:43 crc kubenswrapper[4805]: E0216 21:53:43.609268 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:53:54 crc kubenswrapper[4805]: E0216 21:53:54.602339 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:53:55 crc kubenswrapper[4805]: E0216 21:53:55.602366 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:54:08 crc kubenswrapper[4805]: E0216 21:54:08.600180 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:54:08 crc kubenswrapper[4805]: E0216 21:54:08.600987 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:54:21 crc kubenswrapper[4805]: E0216 21:54:21.606280 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:54:22 crc kubenswrapper[4805]: E0216 21:54:22.602147 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:54:34 crc kubenswrapper[4805]: E0216 21:54:34.600035 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:54:34 crc kubenswrapper[4805]: E0216 21:54:34.600613 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.100126 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.101701 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.326609 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lmmfz"] Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.329245 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.342864 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmmfz"] Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.443844 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5mk6\" (UniqueName: \"kubernetes.io/projected/f5721749-5a93-405e-a8a7-43f5b0cd68a1-kube-api-access-z5mk6\") pod \"redhat-marketplace-lmmfz\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.443918 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-utilities\") pod \"redhat-marketplace-lmmfz\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.444125 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-catalog-content\") pod \"redhat-marketplace-lmmfz\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.546658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-catalog-content\") pod \"redhat-marketplace-lmmfz\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.546847 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5mk6\" (UniqueName: \"kubernetes.io/projected/f5721749-5a93-405e-a8a7-43f5b0cd68a1-kube-api-access-z5mk6\") pod \"redhat-marketplace-lmmfz\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.546910 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-utilities\") pod \"redhat-marketplace-lmmfz\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.547436 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-utilities\") pod \"redhat-marketplace-lmmfz\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.547525 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-catalog-content\") pod \"redhat-marketplace-lmmfz\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.567250 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5mk6\" (UniqueName: 
\"kubernetes.io/projected/f5721749-5a93-405e-a8a7-43f5b0cd68a1-kube-api-access-z5mk6\") pod \"redhat-marketplace-lmmfz\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:38 crc kubenswrapper[4805]: I0216 21:54:38.668615 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:39 crc kubenswrapper[4805]: I0216 21:54:39.172453 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmmfz"] Feb 16 21:54:40 crc kubenswrapper[4805]: I0216 21:54:40.089828 4805 generic.go:334] "Generic (PLEG): container finished" podID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" containerID="2e59fbdfa10efecd8e341b8e35d541d93df4bcafc90a949cff77b0ab6fcd9890" exitCode=0 Feb 16 21:54:40 crc kubenswrapper[4805]: I0216 21:54:40.089875 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmmfz" event={"ID":"f5721749-5a93-405e-a8a7-43f5b0cd68a1","Type":"ContainerDied","Data":"2e59fbdfa10efecd8e341b8e35d541d93df4bcafc90a949cff77b0ab6fcd9890"} Feb 16 21:54:40 crc kubenswrapper[4805]: I0216 21:54:40.090139 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmmfz" event={"ID":"f5721749-5a93-405e-a8a7-43f5b0cd68a1","Type":"ContainerStarted","Data":"286fd5d9155003aff69111f4a1b0bd91390dada47dd0dd104599ebed7896f201"} Feb 16 21:54:41 crc kubenswrapper[4805]: I0216 21:54:41.100281 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmmfz" event={"ID":"f5721749-5a93-405e-a8a7-43f5b0cd68a1","Type":"ContainerStarted","Data":"b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb"} Feb 16 21:54:42 crc kubenswrapper[4805]: I0216 21:54:42.112638 4805 generic.go:334] "Generic (PLEG): container finished" podID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" 
containerID="b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb" exitCode=0 Feb 16 21:54:42 crc kubenswrapper[4805]: I0216 21:54:42.112682 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmmfz" event={"ID":"f5721749-5a93-405e-a8a7-43f5b0cd68a1","Type":"ContainerDied","Data":"b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb"} Feb 16 21:54:43 crc kubenswrapper[4805]: I0216 21:54:43.126551 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmmfz" event={"ID":"f5721749-5a93-405e-a8a7-43f5b0cd68a1","Type":"ContainerStarted","Data":"9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595"} Feb 16 21:54:43 crc kubenswrapper[4805]: I0216 21:54:43.154078 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lmmfz" podStartSLOduration=2.7467442269999998 podStartE2EDuration="5.154060193s" podCreationTimestamp="2026-02-16 21:54:38 +0000 UTC" firstStartedPulling="2026-02-16 21:54:40.092839555 +0000 UTC m=+3497.911522890" lastFinishedPulling="2026-02-16 21:54:42.500155561 +0000 UTC m=+3500.318838856" observedRunningTime="2026-02-16 21:54:43.147022233 +0000 UTC m=+3500.965705528" watchObservedRunningTime="2026-02-16 21:54:43.154060193 +0000 UTC m=+3500.972743488" Feb 16 21:54:48 crc kubenswrapper[4805]: E0216 21:54:48.609373 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:54:48 crc kubenswrapper[4805]: E0216 21:54:48.624018 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:54:48 crc kubenswrapper[4805]: I0216 21:54:48.669673 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:48 crc kubenswrapper[4805]: I0216 21:54:48.669771 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:48 crc kubenswrapper[4805]: I0216 21:54:48.724869 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:49 crc kubenswrapper[4805]: I0216 21:54:49.250927 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:49 crc kubenswrapper[4805]: I0216 21:54:49.304827 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmmfz"] Feb 16 21:54:51 crc kubenswrapper[4805]: I0216 21:54:51.206828 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lmmfz" podUID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" containerName="registry-server" containerID="cri-o://9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595" gracePeriod=2 Feb 16 21:54:51 crc kubenswrapper[4805]: E0216 21:54:51.413895 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5721749_5a93_405e_a8a7_43f5b0cd68a1.slice/crio-9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5721749_5a93_405e_a8a7_43f5b0cd68a1.slice/crio-conmon-9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:54:51 crc kubenswrapper[4805]: I0216 21:54:51.858971 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.022877 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-catalog-content\") pod \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.023127 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-utilities\") pod \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.023231 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5mk6\" (UniqueName: \"kubernetes.io/projected/f5721749-5a93-405e-a8a7-43f5b0cd68a1-kube-api-access-z5mk6\") pod \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\" (UID: \"f5721749-5a93-405e-a8a7-43f5b0cd68a1\") " Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.024354 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-utilities" (OuterVolumeSpecName: "utilities") pod "f5721749-5a93-405e-a8a7-43f5b0cd68a1" (UID: "f5721749-5a93-405e-a8a7-43f5b0cd68a1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.029902 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5721749-5a93-405e-a8a7-43f5b0cd68a1-kube-api-access-z5mk6" (OuterVolumeSpecName: "kube-api-access-z5mk6") pod "f5721749-5a93-405e-a8a7-43f5b0cd68a1" (UID: "f5721749-5a93-405e-a8a7-43f5b0cd68a1"). InnerVolumeSpecName "kube-api-access-z5mk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.053359 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5721749-5a93-405e-a8a7-43f5b0cd68a1" (UID: "f5721749-5a93-405e-a8a7-43f5b0cd68a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.126352 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.126388 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5mk6\" (UniqueName: \"kubernetes.io/projected/f5721749-5a93-405e-a8a7-43f5b0cd68a1-kube-api-access-z5mk6\") on node \"crc\" DevicePath \"\"" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.126400 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5721749-5a93-405e-a8a7-43f5b0cd68a1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.218434 4805 generic.go:334] "Generic (PLEG): container finished" podID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" 
containerID="9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595" exitCode=0 Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.218470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmmfz" event={"ID":"f5721749-5a93-405e-a8a7-43f5b0cd68a1","Type":"ContainerDied","Data":"9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595"} Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.218492 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmmfz" event={"ID":"f5721749-5a93-405e-a8a7-43f5b0cd68a1","Type":"ContainerDied","Data":"286fd5d9155003aff69111f4a1b0bd91390dada47dd0dd104599ebed7896f201"} Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.218511 4805 scope.go:117] "RemoveContainer" containerID="9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.218513 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmmfz" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.261016 4805 scope.go:117] "RemoveContainer" containerID="b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.262547 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmmfz"] Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.273165 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmmfz"] Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.304958 4805 scope.go:117] "RemoveContainer" containerID="2e59fbdfa10efecd8e341b8e35d541d93df4bcafc90a949cff77b0ab6fcd9890" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.351175 4805 scope.go:117] "RemoveContainer" containerID="9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595" Feb 16 21:54:52 crc kubenswrapper[4805]: E0216 21:54:52.351672 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595\": container with ID starting with 9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595 not found: ID does not exist" containerID="9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.351751 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595"} err="failed to get container status \"9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595\": rpc error: code = NotFound desc = could not find container \"9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595\": container with ID starting with 9efcbc7eb43ae6eac2ac311d63544eda2056f63b1418c30411b8c03717ecb595 not found: 
ID does not exist" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.351829 4805 scope.go:117] "RemoveContainer" containerID="b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb" Feb 16 21:54:52 crc kubenswrapper[4805]: E0216 21:54:52.352305 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb\": container with ID starting with b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb not found: ID does not exist" containerID="b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.352331 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb"} err="failed to get container status \"b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb\": rpc error: code = NotFound desc = could not find container \"b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb\": container with ID starting with b6decb2fc76436696b61f0a23485ca1b36a0f6158a5aa9ee447019c2aa01fffb not found: ID does not exist" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.352348 4805 scope.go:117] "RemoveContainer" containerID="2e59fbdfa10efecd8e341b8e35d541d93df4bcafc90a949cff77b0ab6fcd9890" Feb 16 21:54:52 crc kubenswrapper[4805]: E0216 21:54:52.352708 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e59fbdfa10efecd8e341b8e35d541d93df4bcafc90a949cff77b0ab6fcd9890\": container with ID starting with 2e59fbdfa10efecd8e341b8e35d541d93df4bcafc90a949cff77b0ab6fcd9890 not found: ID does not exist" containerID="2e59fbdfa10efecd8e341b8e35d541d93df4bcafc90a949cff77b0ab6fcd9890" Feb 16 21:54:52 crc kubenswrapper[4805]: I0216 21:54:52.352747 4805 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e59fbdfa10efecd8e341b8e35d541d93df4bcafc90a949cff77b0ab6fcd9890"} err="failed to get container status \"2e59fbdfa10efecd8e341b8e35d541d93df4bcafc90a949cff77b0ab6fcd9890\": rpc error: code = NotFound desc = could not find container \"2e59fbdfa10efecd8e341b8e35d541d93df4bcafc90a949cff77b0ab6fcd9890\": container with ID starting with 2e59fbdfa10efecd8e341b8e35d541d93df4bcafc90a949cff77b0ab6fcd9890 not found: ID does not exist" Feb 16 21:54:53 crc kubenswrapper[4805]: I0216 21:54:53.618938 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" path="/var/lib/kubelet/pods/f5721749-5a93-405e-a8a7-43f5b0cd68a1/volumes" Feb 16 21:55:00 crc kubenswrapper[4805]: E0216 21:55:00.602402 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:55:03 crc kubenswrapper[4805]: E0216 21:55:03.618067 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:55:08 crc kubenswrapper[4805]: I0216 21:55:08.099500 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:55:08 crc kubenswrapper[4805]: I0216 21:55:08.100082 4805 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 21:55:12 crc kubenswrapper[4805]: E0216 21:55:12.601017 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 21:55:17 crc kubenswrapper[4805]: E0216 21:55:17.601256 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 21:55:25 crc kubenswrapper[4805]: E0216 21:55:25.599629 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 21:55:28 crc kubenswrapper[4805]: E0216 21:55:28.600536 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 21:55:38 crc kubenswrapper[4805]: I0216 21:55:38.099792 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 21:55:38 crc kubenswrapper[4805]: I0216 21:55:38.100304 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 21:55:38 crc kubenswrapper[4805]: I0216 21:55:38.100344 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd"
Feb 16 21:55:38 crc kubenswrapper[4805]: I0216 21:55:38.101219 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 21:55:38 crc kubenswrapper[4805]: I0216 21:55:38.101275 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" gracePeriod=600
Feb 16 21:55:38 crc kubenswrapper[4805]: E0216 21:55:38.243466 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 21:55:38 crc kubenswrapper[4805]: I0216 21:55:38.716379 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" exitCode=0
Feb 16 21:55:38 crc kubenswrapper[4805]: I0216 21:55:38.716432 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0"}
Feb 16 21:55:38 crc kubenswrapper[4805]: I0216 21:55:38.716469 4805 scope.go:117] "RemoveContainer" containerID="4bde70584d86d2ced57fa80a14a9cd8751c173f2f956e7ceb2370c0e43f83d0f"
Feb 16 21:55:38 crc kubenswrapper[4805]: I0216 21:55:38.717327 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0"
Feb 16 21:55:38 crc kubenswrapper[4805]: E0216 21:55:38.717704 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 21:55:40 crc kubenswrapper[4805]: E0216 21:55:40.600259 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 21:55:43 crc kubenswrapper[4805]: E0216 21:55:43.607998 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 21:55:50 crc kubenswrapper[4805]: I0216 21:55:50.598297 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0"
Feb 16 21:55:50 crc kubenswrapper[4805]: E0216 21:55:50.599113 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 21:55:53 crc kubenswrapper[4805]: E0216 21:55:53.602133 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 21:55:54 crc kubenswrapper[4805]: E0216 21:55:54.599619 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.779537 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zzkdj"]
Feb 16 21:55:55 crc kubenswrapper[4805]: E0216 21:55:55.780521 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" containerName="extract-utilities"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.780539 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" containerName="extract-utilities"
Feb 16 21:55:55 crc kubenswrapper[4805]: E0216 21:55:55.780551 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" containerName="registry-server"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.780558 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" containerName="registry-server"
Feb 16 21:55:55 crc kubenswrapper[4805]: E0216 21:55:55.780605 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" containerName="extract-content"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.780613 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" containerName="extract-content"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.780955 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5721749-5a93-405e-a8a7-43f5b0cd68a1" containerName="registry-server"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.783127 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.795307 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzkdj"]
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.867627 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-utilities\") pod \"community-operators-zzkdj\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") " pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.867953 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfkwm\" (UniqueName: \"kubernetes.io/projected/8500668c-73b2-4906-814a-792ee99a37c7-kube-api-access-wfkwm\") pod \"community-operators-zzkdj\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") " pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.868429 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-catalog-content\") pod \"community-operators-zzkdj\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") " pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.971283 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-utilities\") pod \"community-operators-zzkdj\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") " pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.971579 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfkwm\" (UniqueName: \"kubernetes.io/projected/8500668c-73b2-4906-814a-792ee99a37c7-kube-api-access-wfkwm\") pod \"community-operators-zzkdj\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") " pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.971705 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-utilities\") pod \"community-operators-zzkdj\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") " pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.971712 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-catalog-content\") pod \"community-operators-zzkdj\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") " pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:55:55 crc kubenswrapper[4805]: I0216 21:55:55.972105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-catalog-content\") pod \"community-operators-zzkdj\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") " pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:55:56 crc kubenswrapper[4805]: I0216 21:55:56.014408 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfkwm\" (UniqueName: \"kubernetes.io/projected/8500668c-73b2-4906-814a-792ee99a37c7-kube-api-access-wfkwm\") pod \"community-operators-zzkdj\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") " pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:55:56 crc kubenswrapper[4805]: I0216 21:55:56.101919 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:55:56 crc kubenswrapper[4805]: I0216 21:55:56.589546 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzkdj"]
Feb 16 21:55:56 crc kubenswrapper[4805]: W0216 21:55:56.590377 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8500668c_73b2_4906_814a_792ee99a37c7.slice/crio-4c324ec230a51ba55f64811b3e92e2a97f91a4c70401270614736f73a599aa01 WatchSource:0}: Error finding container 4c324ec230a51ba55f64811b3e92e2a97f91a4c70401270614736f73a599aa01: Status 404 returned error can't find the container with id 4c324ec230a51ba55f64811b3e92e2a97f91a4c70401270614736f73a599aa01
Feb 16 21:55:56 crc kubenswrapper[4805]: I0216 21:55:56.942951 4805 generic.go:334] "Generic (PLEG): container finished" podID="8500668c-73b2-4906-814a-792ee99a37c7" containerID="54c56afa901a6f07f93f18d0ede71ba8b0b8f2337a871f6e0f77cd359688504f" exitCode=0
Feb 16 21:55:56 crc kubenswrapper[4805]: I0216 21:55:56.943166 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzkdj" event={"ID":"8500668c-73b2-4906-814a-792ee99a37c7","Type":"ContainerDied","Data":"54c56afa901a6f07f93f18d0ede71ba8b0b8f2337a871f6e0f77cd359688504f"}
Feb 16 21:55:56 crc kubenswrapper[4805]: I0216 21:55:56.944176 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzkdj" event={"ID":"8500668c-73b2-4906-814a-792ee99a37c7","Type":"ContainerStarted","Data":"4c324ec230a51ba55f64811b3e92e2a97f91a4c70401270614736f73a599aa01"}
Feb 16 21:55:58 crc kubenswrapper[4805]: I0216 21:55:58.965380 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzkdj" event={"ID":"8500668c-73b2-4906-814a-792ee99a37c7","Type":"ContainerStarted","Data":"f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155"}
Feb 16 21:55:59 crc kubenswrapper[4805]: I0216 21:55:59.980047 4805 generic.go:334] "Generic (PLEG): container finished" podID="8500668c-73b2-4906-814a-792ee99a37c7" containerID="f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155" exitCode=0
Feb 16 21:55:59 crc kubenswrapper[4805]: I0216 21:55:59.980170 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzkdj" event={"ID":"8500668c-73b2-4906-814a-792ee99a37c7","Type":"ContainerDied","Data":"f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155"}
Feb 16 21:56:00 crc kubenswrapper[4805]: I0216 21:56:00.999399 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzkdj" event={"ID":"8500668c-73b2-4906-814a-792ee99a37c7","Type":"ContainerStarted","Data":"d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4"}
Feb 16 21:56:01 crc kubenswrapper[4805]: I0216 21:56:01.034875 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zzkdj" podStartSLOduration=2.58935739 podStartE2EDuration="6.034855918s" podCreationTimestamp="2026-02-16 21:55:55 +0000 UTC" firstStartedPulling="2026-02-16 21:55:56.945322545 +0000 UTC m=+3574.764005840" lastFinishedPulling="2026-02-16 21:56:00.390821023 +0000 UTC m=+3578.209504368" observedRunningTime="2026-02-16 21:56:01.028779054 +0000 UTC m=+3578.847462349" watchObservedRunningTime="2026-02-16 21:56:01.034855918 +0000 UTC m=+3578.853539213"
Feb 16 21:56:01 crc kubenswrapper[4805]: I0216 21:56:01.597988 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0"
Feb 16 21:56:01 crc kubenswrapper[4805]: E0216 21:56:01.598467 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 21:56:06 crc kubenswrapper[4805]: I0216 21:56:06.102901 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:56:06 crc kubenswrapper[4805]: I0216 21:56:06.103426 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:56:06 crc kubenswrapper[4805]: I0216 21:56:06.164979 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:56:06 crc kubenswrapper[4805]: E0216 21:56:06.601123 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 21:56:07 crc kubenswrapper[4805]: I0216 21:56:07.116616 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:56:07 crc kubenswrapper[4805]: I0216 21:56:07.168899 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zzkdj"]
Feb 16 21:56:07 crc kubenswrapper[4805]: E0216 21:56:07.599341 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 21:56:09 crc kubenswrapper[4805]: I0216 21:56:09.078082 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zzkdj" podUID="8500668c-73b2-4906-814a-792ee99a37c7" containerName="registry-server" containerID="cri-o://d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4" gracePeriod=2
Feb 16 21:56:09 crc kubenswrapper[4805]: I0216 21:56:09.985444 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.071867 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-catalog-content\") pod \"8500668c-73b2-4906-814a-792ee99a37c7\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") "
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.072279 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfkwm\" (UniqueName: \"kubernetes.io/projected/8500668c-73b2-4906-814a-792ee99a37c7-kube-api-access-wfkwm\") pod \"8500668c-73b2-4906-814a-792ee99a37c7\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") "
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.072514 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-utilities\") pod \"8500668c-73b2-4906-814a-792ee99a37c7\" (UID: \"8500668c-73b2-4906-814a-792ee99a37c7\") "
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.075041 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-utilities" (OuterVolumeSpecName: "utilities") pod "8500668c-73b2-4906-814a-792ee99a37c7" (UID: "8500668c-73b2-4906-814a-792ee99a37c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.080784 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8500668c-73b2-4906-814a-792ee99a37c7-kube-api-access-wfkwm" (OuterVolumeSpecName: "kube-api-access-wfkwm") pod "8500668c-73b2-4906-814a-792ee99a37c7" (UID: "8500668c-73b2-4906-814a-792ee99a37c7"). InnerVolumeSpecName "kube-api-access-wfkwm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.090659 4805 generic.go:334] "Generic (PLEG): container finished" podID="8500668c-73b2-4906-814a-792ee99a37c7" containerID="d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4" exitCode=0
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.090753 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzkdj" event={"ID":"8500668c-73b2-4906-814a-792ee99a37c7","Type":"ContainerDied","Data":"d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4"}
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.090786 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzkdj" event={"ID":"8500668c-73b2-4906-814a-792ee99a37c7","Type":"ContainerDied","Data":"4c324ec230a51ba55f64811b3e92e2a97f91a4c70401270614736f73a599aa01"}
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.090806 4805 scope.go:117] "RemoveContainer" containerID="d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4"
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.090977 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zzkdj"
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.166814 4805 scope.go:117] "RemoveContainer" containerID="f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155"
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.177463 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.177495 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfkwm\" (UniqueName: \"kubernetes.io/projected/8500668c-73b2-4906-814a-792ee99a37c7-kube-api-access-wfkwm\") on node \"crc\" DevicePath \"\""
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.195183 4805 scope.go:117] "RemoveContainer" containerID="54c56afa901a6f07f93f18d0ede71ba8b0b8f2337a871f6e0f77cd359688504f"
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.283943 4805 scope.go:117] "RemoveContainer" containerID="d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4"
Feb 16 21:56:10 crc kubenswrapper[4805]: E0216 21:56:10.285157 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4\": container with ID starting with d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4 not found: ID does not exist" containerID="d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4"
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.285207 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4"} err="failed to get container status \"d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4\": rpc error: code = NotFound desc = could not find container \"d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4\": container with ID starting with d7ae537942ac2b997e99f36138adc791c93065057fb044d7539be750d41743d4 not found: ID does not exist"
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.285240 4805 scope.go:117] "RemoveContainer" containerID="f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155"
Feb 16 21:56:10 crc kubenswrapper[4805]: E0216 21:56:10.287262 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155\": container with ID starting with f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155 not found: ID does not exist" containerID="f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155"
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.287306 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155"} err="failed to get container status \"f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155\": rpc error: code = NotFound desc = could not find container \"f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155\": container with ID starting with f39f6b672488b2c4e4c6947b84e85d4315396acb8ec78193775a800c14912155 not found: ID does not exist"
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.287336 4805 scope.go:117] "RemoveContainer" containerID="54c56afa901a6f07f93f18d0ede71ba8b0b8f2337a871f6e0f77cd359688504f"
Feb 16 21:56:10 crc kubenswrapper[4805]: E0216 21:56:10.290875 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54c56afa901a6f07f93f18d0ede71ba8b0b8f2337a871f6e0f77cd359688504f\": container with ID starting with 54c56afa901a6f07f93f18d0ede71ba8b0b8f2337a871f6e0f77cd359688504f not found: ID does not exist" containerID="54c56afa901a6f07f93f18d0ede71ba8b0b8f2337a871f6e0f77cd359688504f"
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.290929 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c56afa901a6f07f93f18d0ede71ba8b0b8f2337a871f6e0f77cd359688504f"} err="failed to get container status \"54c56afa901a6f07f93f18d0ede71ba8b0b8f2337a871f6e0f77cd359688504f\": rpc error: code = NotFound desc = could not find container \"54c56afa901a6f07f93f18d0ede71ba8b0b8f2337a871f6e0f77cd359688504f\": container with ID starting with 54c56afa901a6f07f93f18d0ede71ba8b0b8f2337a871f6e0f77cd359688504f not found: ID does not exist"
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.354927 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8500668c-73b2-4906-814a-792ee99a37c7" (UID: "8500668c-73b2-4906-814a-792ee99a37c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.382691 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8500668c-73b2-4906-814a-792ee99a37c7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.424593 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zzkdj"]
Feb 16 21:56:10 crc kubenswrapper[4805]: I0216 21:56:10.435588 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zzkdj"]
Feb 16 21:56:11 crc kubenswrapper[4805]: I0216 21:56:11.611217 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8500668c-73b2-4906-814a-792ee99a37c7" path="/var/lib/kubelet/pods/8500668c-73b2-4906-814a-792ee99a37c7/volumes"
Feb 16 21:56:16 crc kubenswrapper[4805]: I0216 21:56:16.598011 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0"
Feb 16 21:56:16 crc kubenswrapper[4805]: E0216 21:56:16.598755 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 21:56:20 crc kubenswrapper[4805]: E0216 21:56:20.602341 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 21:56:21 crc kubenswrapper[4805]: E0216 21:56:21.599877 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 21:56:29 crc kubenswrapper[4805]: I0216 21:56:29.000001 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7688d557bc-2jgzd" podUID="95ea5d76-aedb-4a0a-a03d-fdc9140265e4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Feb 16 21:56:31 crc kubenswrapper[4805]: I0216 21:56:31.599687 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0"
Feb 16 21:56:31 crc kubenswrapper[4805]: E0216 21:56:31.600334 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 21:56:32 crc kubenswrapper[4805]: E0216 21:56:32.599706 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 21:56:33 crc kubenswrapper[4805]: E0216 21:56:33.609301 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 21:56:42 crc kubenswrapper[4805]: I0216 21:56:42.597601 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0"
Feb 16 21:56:42 crc kubenswrapper[4805]: E0216 21:56:42.598398 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 21:56:46 crc kubenswrapper[4805]: I0216 21:56:46.600776 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 21:56:46 crc kubenswrapper[4805]: E0216 21:56:46.725769 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 21:56:46 crc kubenswrapper[4805]: E0216 21:56:46.725819 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 21:56:46 crc kubenswrapper[4805]: E0216 21:56:46.725917 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 21:56:46 crc kubenswrapper[4805]: E0216 21:56:46.727102 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 21:56:47 crc kubenswrapper[4805]: E0216 21:56:47.599780 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 21:56:57 crc kubenswrapper[4805]: I0216 21:56:57.598469 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0"
Feb 16 21:56:57 crc kubenswrapper[4805]: E0216 21:56:57.599454 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 21:56:58 crc kubenswrapper[4805]: E0216 21:56:58.600275 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 21:56:59 crc kubenswrapper[4805]: E0216 21:56:59.602450 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm"
podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:57:09 crc kubenswrapper[4805]: E0216 21:57:09.725935 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:57:09 crc kubenswrapper[4805]: E0216 21:57:09.726504 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 21:57:09 crc kubenswrapper[4805]: E0216 21:57:09.726651 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 21:57:09 crc kubenswrapper[4805]: E0216 21:57:09.727877 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:57:10 crc kubenswrapper[4805]: I0216 21:57:10.598659 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:57:10 crc kubenswrapper[4805]: E0216 21:57:10.599045 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:57:11 crc kubenswrapper[4805]: E0216 21:57:11.602003 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:57:22 crc kubenswrapper[4805]: E0216 21:57:22.600874 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:57:23 crc kubenswrapper[4805]: I0216 21:57:23.614453 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:57:23 crc kubenswrapper[4805]: E0216 21:57:23.614899 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:57:23 crc kubenswrapper[4805]: E0216 21:57:23.618187 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:57:33 crc kubenswrapper[4805]: E0216 21:57:33.610738 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:57:35 crc kubenswrapper[4805]: I0216 21:57:35.600056 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:57:35 crc kubenswrapper[4805]: E0216 21:57:35.600555 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:57:35 crc kubenswrapper[4805]: E0216 21:57:35.601639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:57:47 crc kubenswrapper[4805]: E0216 21:57:47.599831 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:57:47 crc kubenswrapper[4805]: E0216 21:57:47.599832 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:57:49 crc kubenswrapper[4805]: I0216 21:57:49.597595 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:57:49 crc kubenswrapper[4805]: E0216 21:57:49.598367 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:58:00 crc kubenswrapper[4805]: E0216 21:58:00.607753 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:58:01 crc kubenswrapper[4805]: I0216 21:58:01.599563 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:58:01 crc kubenswrapper[4805]: E0216 21:58:01.600205 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:58:01 crc kubenswrapper[4805]: E0216 21:58:01.603874 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:58:12 crc kubenswrapper[4805]: E0216 21:58:12.602173 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:58:16 crc kubenswrapper[4805]: I0216 21:58:16.598048 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:58:16 crc kubenswrapper[4805]: E0216 21:58:16.599453 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:58:16 crc kubenswrapper[4805]: E0216 21:58:16.600849 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:58:23 crc kubenswrapper[4805]: E0216 21:58:23.610265 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:58:28 crc kubenswrapper[4805]: E0216 21:58:28.600937 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:58:31 crc kubenswrapper[4805]: I0216 21:58:31.598340 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:58:31 crc kubenswrapper[4805]: E0216 21:58:31.599350 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:58:36 crc kubenswrapper[4805]: E0216 21:58:36.601047 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:58:41 crc kubenswrapper[4805]: E0216 21:58:41.601186 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:58:45 crc kubenswrapper[4805]: I0216 21:58:45.598331 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:58:45 crc kubenswrapper[4805]: E0216 21:58:45.599032 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:58:49 crc kubenswrapper[4805]: E0216 21:58:49.601503 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:58:56 crc kubenswrapper[4805]: E0216 21:58:56.600625 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:58:58 crc kubenswrapper[4805]: I0216 21:58:58.598649 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:58:58 crc kubenswrapper[4805]: E0216 21:58:58.599638 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:59:04 crc kubenswrapper[4805]: E0216 21:59:04.602529 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:59:09 crc kubenswrapper[4805]: I0216 21:59:09.598452 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:59:09 crc kubenswrapper[4805]: E0216 21:59:09.599365 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:59:09 crc kubenswrapper[4805]: E0216 21:59:09.602021 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:59:17 crc kubenswrapper[4805]: E0216 21:59:17.602854 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:59:21 crc kubenswrapper[4805]: I0216 21:59:21.598407 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:59:21 crc kubenswrapper[4805]: E0216 21:59:21.599563 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:59:22 crc kubenswrapper[4805]: E0216 21:59:22.601194 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:59:23 crc kubenswrapper[4805]: I0216 21:59:23.841096 4805 generic.go:334] "Generic (PLEG): container finished" podID="54cd5193-d167-4eaa-86bf-3e5ca7a7703a" containerID="d65ded99b816897766e2c6f9a20b67de600d2bd4279b520723ab5ce3629936c1" exitCode=2 Feb 16 21:59:23 crc kubenswrapper[4805]: I0216 21:59:23.841235 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" event={"ID":"54cd5193-d167-4eaa-86bf-3e5ca7a7703a","Type":"ContainerDied","Data":"d65ded99b816897766e2c6f9a20b67de600d2bd4279b520723ab5ce3629936c1"} Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.284361 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.331782 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-ssh-key-openstack-edpm-ipam\") pod \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.331865 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-inventory\") pod \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.332006 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdsq8\" (UniqueName: 
\"kubernetes.io/projected/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-kube-api-access-zdsq8\") pod \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\" (UID: \"54cd5193-d167-4eaa-86bf-3e5ca7a7703a\") " Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.339600 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-kube-api-access-zdsq8" (OuterVolumeSpecName: "kube-api-access-zdsq8") pod "54cd5193-d167-4eaa-86bf-3e5ca7a7703a" (UID: "54cd5193-d167-4eaa-86bf-3e5ca7a7703a"). InnerVolumeSpecName "kube-api-access-zdsq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.375076 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "54cd5193-d167-4eaa-86bf-3e5ca7a7703a" (UID: "54cd5193-d167-4eaa-86bf-3e5ca7a7703a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.381809 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-inventory" (OuterVolumeSpecName: "inventory") pod "54cd5193-d167-4eaa-86bf-3e5ca7a7703a" (UID: "54cd5193-d167-4eaa-86bf-3e5ca7a7703a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.435830 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdsq8\" (UniqueName: \"kubernetes.io/projected/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-kube-api-access-zdsq8\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.435871 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.435885 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54cd5193-d167-4eaa-86bf-3e5ca7a7703a-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.866571 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" event={"ID":"54cd5193-d167-4eaa-86bf-3e5ca7a7703a","Type":"ContainerDied","Data":"fd517d4766095eb9dabafe186fb207fa401eef7f8f63685e451bb855b3cdd50a"} Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.866940 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd517d4766095eb9dabafe186fb207fa401eef7f8f63685e451bb855b3cdd50a" Feb 16 21:59:25 crc kubenswrapper[4805]: I0216 21:59:25.866655 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4v99m" Feb 16 21:59:31 crc kubenswrapper[4805]: E0216 21:59:31.600558 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:59:34 crc kubenswrapper[4805]: E0216 21:59:34.600138 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:59:36 crc kubenswrapper[4805]: I0216 21:59:36.598370 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:59:36 crc kubenswrapper[4805]: E0216 21:59:36.599272 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:59:44 crc kubenswrapper[4805]: E0216 21:59:44.600052 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 21:59:47 
crc kubenswrapper[4805]: I0216 21:59:47.599894 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 21:59:47 crc kubenswrapper[4805]: E0216 21:59:47.600856 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 21:59:47 crc kubenswrapper[4805]: E0216 21:59:47.601344 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 21:59:57 crc kubenswrapper[4805]: E0216 21:59:57.601060 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.165251 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm"] Feb 16 22:00:00 crc kubenswrapper[4805]: E0216 22:00:00.166427 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8500668c-73b2-4906-814a-792ee99a37c7" containerName="registry-server" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.166444 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8500668c-73b2-4906-814a-792ee99a37c7" containerName="registry-server" Feb 16 22:00:00 crc kubenswrapper[4805]: E0216 22:00:00.166475 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8500668c-73b2-4906-814a-792ee99a37c7" containerName="extract-content" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.166484 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8500668c-73b2-4906-814a-792ee99a37c7" containerName="extract-content" Feb 16 22:00:00 crc kubenswrapper[4805]: E0216 22:00:00.166505 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54cd5193-d167-4eaa-86bf-3e5ca7a7703a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.166514 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="54cd5193-d167-4eaa-86bf-3e5ca7a7703a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:00:00 crc kubenswrapper[4805]: E0216 22:00:00.166541 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8500668c-73b2-4906-814a-792ee99a37c7" containerName="extract-utilities" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.166550 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8500668c-73b2-4906-814a-792ee99a37c7" containerName="extract-utilities" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.166955 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8500668c-73b2-4906-814a-792ee99a37c7" containerName="registry-server" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.166975 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="54cd5193-d167-4eaa-86bf-3e5ca7a7703a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.168092 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.177408 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm"] Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.201470 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.201531 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.330407 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/181d1922-95e4-456f-9fad-32e8f6f493da-secret-volume\") pod \"collect-profiles-29521320-m6qcm\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.330547 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxhs9\" (UniqueName: \"kubernetes.io/projected/181d1922-95e4-456f-9fad-32e8f6f493da-kube-api-access-nxhs9\") pod \"collect-profiles-29521320-m6qcm\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.330600 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181d1922-95e4-456f-9fad-32e8f6f493da-config-volume\") pod \"collect-profiles-29521320-m6qcm\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.433556 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181d1922-95e4-456f-9fad-32e8f6f493da-config-volume\") pod \"collect-profiles-29521320-m6qcm\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.433768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/181d1922-95e4-456f-9fad-32e8f6f493da-secret-volume\") pod \"collect-profiles-29521320-m6qcm\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.433841 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxhs9\" (UniqueName: \"kubernetes.io/projected/181d1922-95e4-456f-9fad-32e8f6f493da-kube-api-access-nxhs9\") pod \"collect-profiles-29521320-m6qcm\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.434750 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181d1922-95e4-456f-9fad-32e8f6f493da-config-volume\") pod \"collect-profiles-29521320-m6qcm\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.451946 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxhs9\" (UniqueName: 
\"kubernetes.io/projected/181d1922-95e4-456f-9fad-32e8f6f493da-kube-api-access-nxhs9\") pod \"collect-profiles-29521320-m6qcm\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.452000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/181d1922-95e4-456f-9fad-32e8f6f493da-secret-volume\") pod \"collect-profiles-29521320-m6qcm\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:00 crc kubenswrapper[4805]: I0216 22:00:00.529654 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:00 crc kubenswrapper[4805]: E0216 22:00:00.600236 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:00:01 crc kubenswrapper[4805]: I0216 22:00:01.030815 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm"] Feb 16 22:00:01 crc kubenswrapper[4805]: W0216 22:00:01.039187 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod181d1922_95e4_456f_9fad_32e8f6f493da.slice/crio-839e35bc9af772d59040ae9f4f561709b794ef286e37b44a30793774a15c8f8d WatchSource:0}: Error finding container 839e35bc9af772d59040ae9f4f561709b794ef286e37b44a30793774a15c8f8d: Status 404 returned error can't find the container with id 
839e35bc9af772d59040ae9f4f561709b794ef286e37b44a30793774a15c8f8d Feb 16 22:00:01 crc kubenswrapper[4805]: I0216 22:00:01.288672 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" event={"ID":"181d1922-95e4-456f-9fad-32e8f6f493da","Type":"ContainerStarted","Data":"839e35bc9af772d59040ae9f4f561709b794ef286e37b44a30793774a15c8f8d"} Feb 16 22:00:01 crc kubenswrapper[4805]: I0216 22:00:01.309985 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" podStartSLOduration=1.309965485 podStartE2EDuration="1.309965485s" podCreationTimestamp="2026-02-16 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:00:01.303909433 +0000 UTC m=+3819.122592728" watchObservedRunningTime="2026-02-16 22:00:01.309965485 +0000 UTC m=+3819.128648780" Feb 16 22:00:01 crc kubenswrapper[4805]: I0216 22:00:01.597918 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 22:00:01 crc kubenswrapper[4805]: E0216 22:00:01.598486 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:00:02 crc kubenswrapper[4805]: I0216 22:00:02.299259 4805 generic.go:334] "Generic (PLEG): container finished" podID="181d1922-95e4-456f-9fad-32e8f6f493da" containerID="2f3fac78b5676094cb20b159924f7dd9bc88776cf5e8af6f4acc74f6513d1af3" exitCode=0 Feb 16 22:00:02 crc kubenswrapper[4805]: I0216 
22:00:02.299318 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" event={"ID":"181d1922-95e4-456f-9fad-32e8f6f493da","Type":"ContainerDied","Data":"2f3fac78b5676094cb20b159924f7dd9bc88776cf5e8af6f4acc74f6513d1af3"} Feb 16 22:00:03 crc kubenswrapper[4805]: I0216 22:00:03.756478 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:03 crc kubenswrapper[4805]: I0216 22:00:03.935789 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/181d1922-95e4-456f-9fad-32e8f6f493da-secret-volume\") pod \"181d1922-95e4-456f-9fad-32e8f6f493da\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " Feb 16 22:00:03 crc kubenswrapper[4805]: I0216 22:00:03.936108 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxhs9\" (UniqueName: \"kubernetes.io/projected/181d1922-95e4-456f-9fad-32e8f6f493da-kube-api-access-nxhs9\") pod \"181d1922-95e4-456f-9fad-32e8f6f493da\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " Feb 16 22:00:03 crc kubenswrapper[4805]: I0216 22:00:03.936469 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181d1922-95e4-456f-9fad-32e8f6f493da-config-volume\") pod \"181d1922-95e4-456f-9fad-32e8f6f493da\" (UID: \"181d1922-95e4-456f-9fad-32e8f6f493da\") " Feb 16 22:00:03 crc kubenswrapper[4805]: I0216 22:00:03.937252 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/181d1922-95e4-456f-9fad-32e8f6f493da-config-volume" (OuterVolumeSpecName: "config-volume") pod "181d1922-95e4-456f-9fad-32e8f6f493da" (UID: "181d1922-95e4-456f-9fad-32e8f6f493da"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:00:03 crc kubenswrapper[4805]: I0216 22:00:03.943019 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181d1922-95e4-456f-9fad-32e8f6f493da-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "181d1922-95e4-456f-9fad-32e8f6f493da" (UID: "181d1922-95e4-456f-9fad-32e8f6f493da"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:03 crc kubenswrapper[4805]: I0216 22:00:03.943045 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181d1922-95e4-456f-9fad-32e8f6f493da-kube-api-access-nxhs9" (OuterVolumeSpecName: "kube-api-access-nxhs9") pod "181d1922-95e4-456f-9fad-32e8f6f493da" (UID: "181d1922-95e4-456f-9fad-32e8f6f493da"). InnerVolumeSpecName "kube-api-access-nxhs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:04 crc kubenswrapper[4805]: I0216 22:00:04.039665 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181d1922-95e4-456f-9fad-32e8f6f493da-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:04 crc kubenswrapper[4805]: I0216 22:00:04.039702 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/181d1922-95e4-456f-9fad-32e8f6f493da-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:04 crc kubenswrapper[4805]: I0216 22:00:04.039712 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxhs9\" (UniqueName: \"kubernetes.io/projected/181d1922-95e4-456f-9fad-32e8f6f493da-kube-api-access-nxhs9\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:04 crc kubenswrapper[4805]: I0216 22:00:04.323109 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" 
event={"ID":"181d1922-95e4-456f-9fad-32e8f6f493da","Type":"ContainerDied","Data":"839e35bc9af772d59040ae9f4f561709b794ef286e37b44a30793774a15c8f8d"} Feb 16 22:00:04 crc kubenswrapper[4805]: I0216 22:00:04.323159 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="839e35bc9af772d59040ae9f4f561709b794ef286e37b44a30793774a15c8f8d" Feb 16 22:00:04 crc kubenswrapper[4805]: I0216 22:00:04.323230 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-m6qcm" Feb 16 22:00:04 crc kubenswrapper[4805]: I0216 22:00:04.396236 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx"] Feb 16 22:00:04 crc kubenswrapper[4805]: I0216 22:00:04.410315 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-4vspx"] Feb 16 22:00:05 crc kubenswrapper[4805]: I0216 22:00:05.610408 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b3f42a1-7bfb-46d0-9cca-3de49e378aa8" path="/var/lib/kubelet/pods/6b3f42a1-7bfb-46d0-9cca-3de49e378aa8/volumes" Feb 16 22:00:11 crc kubenswrapper[4805]: E0216 22:00:11.600617 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:00:13 crc kubenswrapper[4805]: I0216 22:00:13.605250 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 22:00:13 crc kubenswrapper[4805]: E0216 22:00:13.606065 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:00:14 crc kubenswrapper[4805]: E0216 22:00:14.601030 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:00:23 crc kubenswrapper[4805]: E0216 22:00:23.611919 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:00:27 crc kubenswrapper[4805]: E0216 22:00:27.599775 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:00:28 crc kubenswrapper[4805]: I0216 22:00:28.598007 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 22:00:28 crc kubenswrapper[4805]: E0216 22:00:28.598691 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:00:35 crc kubenswrapper[4805]: E0216 22:00:35.601072 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:00:37 crc kubenswrapper[4805]: I0216 22:00:37.991733 4805 scope.go:117] "RemoveContainer" containerID="c7dda31eddc9cc163d3c47e7b44967fc4364b95ed92c7547082337f67eb8836c" Feb 16 22:00:38 crc kubenswrapper[4805]: E0216 22:00:38.599233 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:00:42 crc kubenswrapper[4805]: I0216 22:00:42.597866 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 22:00:43 crc kubenswrapper[4805]: I0216 22:00:43.736103 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"ada81ab054aab5f35ec26766a68b4422744f5fd713c0f0116d56fc3c7eb55664"} Feb 16 22:00:46 crc kubenswrapper[4805]: E0216 22:00:46.600128 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:00:49 crc kubenswrapper[4805]: E0216 22:00:49.600753 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.157871 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29521321-x6zk9"] Feb 16 22:01:00 crc kubenswrapper[4805]: E0216 22:01:00.160025 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181d1922-95e4-456f-9fad-32e8f6f493da" containerName="collect-profiles" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.160145 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="181d1922-95e4-456f-9fad-32e8f6f493da" containerName="collect-profiles" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.160503 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="181d1922-95e4-456f-9fad-32e8f6f493da" containerName="collect-profiles" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.161634 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.177232 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521321-x6zk9"] Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.333403 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-combined-ca-bundle\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.333574 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7rwv\" (UniqueName: \"kubernetes.io/projected/cd502f9b-caad-477e-8f1e-82567d04366f-kube-api-access-v7rwv\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.333861 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-config-data\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.334116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-fernet-keys\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.436647 4805 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-config-data\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.436796 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-fernet-keys\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.436838 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-combined-ca-bundle\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.436928 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7rwv\" (UniqueName: \"kubernetes.io/projected/cd502f9b-caad-477e-8f1e-82567d04366f-kube-api-access-v7rwv\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.442901 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-config-data\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.443587 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-fernet-keys\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.445083 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-combined-ca-bundle\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.452594 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7rwv\" (UniqueName: \"kubernetes.io/projected/cd502f9b-caad-477e-8f1e-82567d04366f-kube-api-access-v7rwv\") pod \"keystone-cron-29521321-x6zk9\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") " pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.485415 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29521321-x6zk9" Feb 16 22:01:00 crc kubenswrapper[4805]: E0216 22:01:00.608230 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:01:00 crc kubenswrapper[4805]: I0216 22:01:00.974065 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521321-x6zk9"] Feb 16 22:01:00 crc kubenswrapper[4805]: W0216 22:01:00.984821 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd502f9b_caad_477e_8f1e_82567d04366f.slice/crio-855ec7886748c096c524a6d316c774bbd05ef26b8fa38bb66b6372f45be76391 WatchSource:0}: Error finding container 855ec7886748c096c524a6d316c774bbd05ef26b8fa38bb66b6372f45be76391: Status 404 returned error can't find the container with id 855ec7886748c096c524a6d316c774bbd05ef26b8fa38bb66b6372f45be76391 Feb 16 22:01:01 crc kubenswrapper[4805]: I0216 22:01:01.940819 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-x6zk9" event={"ID":"cd502f9b-caad-477e-8f1e-82567d04366f","Type":"ContainerStarted","Data":"4e189cbfcdfacf07687333b0ad0eb775d5756e071289006c48a2d9c46921b4de"} Feb 16 22:01:01 crc kubenswrapper[4805]: I0216 22:01:01.941209 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-x6zk9" event={"ID":"cd502f9b-caad-477e-8f1e-82567d04366f","Type":"ContainerStarted","Data":"855ec7886748c096c524a6d316c774bbd05ef26b8fa38bb66b6372f45be76391"} Feb 16 22:01:01 crc kubenswrapper[4805]: I0216 22:01:01.957201 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29521321-x6zk9" 
podStartSLOduration=1.957169301 podStartE2EDuration="1.957169301s" podCreationTimestamp="2026-02-16 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:01:01.956493743 +0000 UTC m=+3879.775177038" watchObservedRunningTime="2026-02-16 22:01:01.957169301 +0000 UTC m=+3879.775852596" Feb 16 22:01:02 crc kubenswrapper[4805]: E0216 22:01:02.601698 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:01:04 crc kubenswrapper[4805]: I0216 22:01:04.975183 4805 generic.go:334] "Generic (PLEG): container finished" podID="cd502f9b-caad-477e-8f1e-82567d04366f" containerID="4e189cbfcdfacf07687333b0ad0eb775d5756e071289006c48a2d9c46921b4de" exitCode=0 Feb 16 22:01:04 crc kubenswrapper[4805]: I0216 22:01:04.975264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-x6zk9" event={"ID":"cd502f9b-caad-477e-8f1e-82567d04366f","Type":"ContainerDied","Data":"4e189cbfcdfacf07687333b0ad0eb775d5756e071289006c48a2d9c46921b4de"} Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.528485 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29521321-x6zk9"
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.608130 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-combined-ca-bundle\") pod \"cd502f9b-caad-477e-8f1e-82567d04366f\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") "
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.608187 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-config-data\") pod \"cd502f9b-caad-477e-8f1e-82567d04366f\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") "
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.608423 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-fernet-keys\") pod \"cd502f9b-caad-477e-8f1e-82567d04366f\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") "
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.608454 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7rwv\" (UniqueName: \"kubernetes.io/projected/cd502f9b-caad-477e-8f1e-82567d04366f-kube-api-access-v7rwv\") pod \"cd502f9b-caad-477e-8f1e-82567d04366f\" (UID: \"cd502f9b-caad-477e-8f1e-82567d04366f\") "
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.614206 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd502f9b-caad-477e-8f1e-82567d04366f-kube-api-access-v7rwv" (OuterVolumeSpecName: "kube-api-access-v7rwv") pod "cd502f9b-caad-477e-8f1e-82567d04366f" (UID: "cd502f9b-caad-477e-8f1e-82567d04366f"). InnerVolumeSpecName "kube-api-access-v7rwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.621110 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cd502f9b-caad-477e-8f1e-82567d04366f" (UID: "cd502f9b-caad-477e-8f1e-82567d04366f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.649893 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd502f9b-caad-477e-8f1e-82567d04366f" (UID: "cd502f9b-caad-477e-8f1e-82567d04366f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.686693 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-config-data" (OuterVolumeSpecName: "config-data") pod "cd502f9b-caad-477e-8f1e-82567d04366f" (UID: "cd502f9b-caad-477e-8f1e-82567d04366f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.712497 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.712528 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.712538 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd502f9b-caad-477e-8f1e-82567d04366f-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.712547 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7rwv\" (UniqueName: \"kubernetes.io/projected/cd502f9b-caad-477e-8f1e-82567d04366f-kube-api-access-v7rwv\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.997046 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-x6zk9" event={"ID":"cd502f9b-caad-477e-8f1e-82567d04366f","Type":"ContainerDied","Data":"855ec7886748c096c524a6d316c774bbd05ef26b8fa38bb66b6372f45be76391"}
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.997089 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="855ec7886748c096c524a6d316c774bbd05ef26b8fa38bb66b6372f45be76391"
Feb 16 22:01:06 crc kubenswrapper[4805]: I0216 22:01:06.997162 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521321-x6zk9"
Feb 16 22:01:11 crc kubenswrapper[4805]: E0216 22:01:11.602154 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:01:13 crc kubenswrapper[4805]: E0216 22:01:13.611090 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:01:22 crc kubenswrapper[4805]: E0216 22:01:22.601109 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:01:24 crc kubenswrapper[4805]: E0216 22:01:24.600415 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:01:24 crc kubenswrapper[4805]: I0216 22:01:24.799507 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qn566"]
Feb 16 22:01:24 crc kubenswrapper[4805]: E0216 22:01:24.800103 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd502f9b-caad-477e-8f1e-82567d04366f" containerName="keystone-cron"
Feb 16 22:01:24 crc kubenswrapper[4805]: I0216 22:01:24.800124 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd502f9b-caad-477e-8f1e-82567d04366f" containerName="keystone-cron"
Feb 16 22:01:24 crc kubenswrapper[4805]: I0216 22:01:24.800442 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd502f9b-caad-477e-8f1e-82567d04366f" containerName="keystone-cron"
Feb 16 22:01:24 crc kubenswrapper[4805]: I0216 22:01:24.807445 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:24 crc kubenswrapper[4805]: I0216 22:01:24.817427 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qn566"]
Feb 16 22:01:24 crc kubenswrapper[4805]: I0216 22:01:24.964139 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-utilities\") pod \"certified-operators-qn566\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") " pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:24 crc kubenswrapper[4805]: I0216 22:01:24.964517 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjm97\" (UniqueName: \"kubernetes.io/projected/99ac2568-8835-4dc3-bb14-a951d36a0552-kube-api-access-vjm97\") pod \"certified-operators-qn566\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") " pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:24 crc kubenswrapper[4805]: I0216 22:01:24.964557 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-catalog-content\") pod \"certified-operators-qn566\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") " pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:25 crc kubenswrapper[4805]: I0216 22:01:25.066995 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-utilities\") pod \"certified-operators-qn566\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") " pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:25 crc kubenswrapper[4805]: I0216 22:01:25.067097 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjm97\" (UniqueName: \"kubernetes.io/projected/99ac2568-8835-4dc3-bb14-a951d36a0552-kube-api-access-vjm97\") pod \"certified-operators-qn566\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") " pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:25 crc kubenswrapper[4805]: I0216 22:01:25.067140 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-catalog-content\") pod \"certified-operators-qn566\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") " pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:25 crc kubenswrapper[4805]: I0216 22:01:25.067622 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-utilities\") pod \"certified-operators-qn566\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") " pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:25 crc kubenswrapper[4805]: I0216 22:01:25.067824 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-catalog-content\") pod \"certified-operators-qn566\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") " pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:25 crc kubenswrapper[4805]: I0216 22:01:25.088970 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjm97\" (UniqueName: \"kubernetes.io/projected/99ac2568-8835-4dc3-bb14-a951d36a0552-kube-api-access-vjm97\") pod \"certified-operators-qn566\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") " pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:25 crc kubenswrapper[4805]: I0216 22:01:25.132370 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:25 crc kubenswrapper[4805]: I0216 22:01:25.719197 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qn566"]
Feb 16 22:01:26 crc kubenswrapper[4805]: I0216 22:01:26.242052 4805 generic.go:334] "Generic (PLEG): container finished" podID="99ac2568-8835-4dc3-bb14-a951d36a0552" containerID="e2b54aed1b2f0a4b771adb2d2160913dca08ad104e1cea28589237e6663d7c7e" exitCode=0
Feb 16 22:01:26 crc kubenswrapper[4805]: I0216 22:01:26.242114 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn566" event={"ID":"99ac2568-8835-4dc3-bb14-a951d36a0552","Type":"ContainerDied","Data":"e2b54aed1b2f0a4b771adb2d2160913dca08ad104e1cea28589237e6663d7c7e"}
Feb 16 22:01:26 crc kubenswrapper[4805]: I0216 22:01:26.242333 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn566" event={"ID":"99ac2568-8835-4dc3-bb14-a951d36a0552","Type":"ContainerStarted","Data":"f910a2b3fb28b5d693953d0bb93964a34f352e93453bc12be977589cbb558aaf"}
Feb 16 22:01:27 crc kubenswrapper[4805]: I0216 22:01:27.259134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn566" event={"ID":"99ac2568-8835-4dc3-bb14-a951d36a0552","Type":"ContainerStarted","Data":"f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589"}
Feb 16 22:01:28 crc kubenswrapper[4805]: I0216 22:01:28.273554 4805 generic.go:334] "Generic (PLEG): container finished" podID="99ac2568-8835-4dc3-bb14-a951d36a0552" containerID="f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589" exitCode=0
Feb 16 22:01:28 crc kubenswrapper[4805]: I0216 22:01:28.273643 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn566" event={"ID":"99ac2568-8835-4dc3-bb14-a951d36a0552","Type":"ContainerDied","Data":"f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589"}
Feb 16 22:01:29 crc kubenswrapper[4805]: I0216 22:01:29.287179 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn566" event={"ID":"99ac2568-8835-4dc3-bb14-a951d36a0552","Type":"ContainerStarted","Data":"c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e"}
Feb 16 22:01:29 crc kubenswrapper[4805]: I0216 22:01:29.307694 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qn566" podStartSLOduration=2.869499291 podStartE2EDuration="5.30767367s" podCreationTimestamp="2026-02-16 22:01:24 +0000 UTC" firstStartedPulling="2026-02-16 22:01:26.244292192 +0000 UTC m=+3904.062975487" lastFinishedPulling="2026-02-16 22:01:28.682466571 +0000 UTC m=+3906.501149866" observedRunningTime="2026-02-16 22:01:29.306107348 +0000 UTC m=+3907.124790643" watchObservedRunningTime="2026-02-16 22:01:29.30767367 +0000 UTC m=+3907.126356965"
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.170915 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-68blv"]
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.173810 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.187552 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68blv"]
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.253583 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66ssb\" (UniqueName: \"kubernetes.io/projected/4b51cc4a-8c49-465e-94df-030e188d1b29-kube-api-access-66ssb\") pod \"redhat-operators-68blv\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.254129 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-catalog-content\") pod \"redhat-operators-68blv\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.254243 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-utilities\") pod \"redhat-operators-68blv\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.356072 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66ssb\" (UniqueName: \"kubernetes.io/projected/4b51cc4a-8c49-465e-94df-030e188d1b29-kube-api-access-66ssb\") pod \"redhat-operators-68blv\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.356441 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-catalog-content\") pod \"redhat-operators-68blv\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.356484 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-utilities\") pod \"redhat-operators-68blv\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.356932 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-catalog-content\") pod \"redhat-operators-68blv\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.356969 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-utilities\") pod \"redhat-operators-68blv\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.523545 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66ssb\" (UniqueName: \"kubernetes.io/projected/4b51cc4a-8c49-465e-94df-030e188d1b29-kube-api-access-66ssb\") pod \"redhat-operators-68blv\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:32 crc kubenswrapper[4805]: I0216 22:01:32.545317 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:33 crc kubenswrapper[4805]: W0216 22:01:33.110910 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b51cc4a_8c49_465e_94df_030e188d1b29.slice/crio-cc13f895a52c06fbd88af14cd544827a0423733fd939b4701725d6873eeecec7 WatchSource:0}: Error finding container cc13f895a52c06fbd88af14cd544827a0423733fd939b4701725d6873eeecec7: Status 404 returned error can't find the container with id cc13f895a52c06fbd88af14cd544827a0423733fd939b4701725d6873eeecec7
Feb 16 22:01:33 crc kubenswrapper[4805]: I0216 22:01:33.117403 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68blv"]
Feb 16 22:01:33 crc kubenswrapper[4805]: I0216 22:01:33.343764 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68blv" event={"ID":"4b51cc4a-8c49-465e-94df-030e188d1b29","Type":"ContainerStarted","Data":"4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1"}
Feb 16 22:01:33 crc kubenswrapper[4805]: I0216 22:01:33.346628 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68blv" event={"ID":"4b51cc4a-8c49-465e-94df-030e188d1b29","Type":"ContainerStarted","Data":"cc13f895a52c06fbd88af14cd544827a0423733fd939b4701725d6873eeecec7"}
Feb 16 22:01:34 crc kubenswrapper[4805]: I0216 22:01:34.358447 4805 generic.go:334] "Generic (PLEG): container finished" podID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerID="4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1" exitCode=0
Feb 16 22:01:34 crc kubenswrapper[4805]: I0216 22:01:34.358662 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68blv" event={"ID":"4b51cc4a-8c49-465e-94df-030e188d1b29","Type":"ContainerDied","Data":"4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1"}
Feb 16 22:01:34 crc kubenswrapper[4805]: I0216 22:01:34.358785 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68blv" event={"ID":"4b51cc4a-8c49-465e-94df-030e188d1b29","Type":"ContainerStarted","Data":"de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba"}
Feb 16 22:01:35 crc kubenswrapper[4805]: I0216 22:01:35.133640 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:35 crc kubenswrapper[4805]: I0216 22:01:35.133915 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:35 crc kubenswrapper[4805]: I0216 22:01:35.178934 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:35 crc kubenswrapper[4805]: I0216 22:01:35.413154 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:36 crc kubenswrapper[4805]: E0216 22:01:36.599964 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:01:36 crc kubenswrapper[4805]: E0216 22:01:36.599961 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:01:37 crc kubenswrapper[4805]: I0216 22:01:37.360891 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qn566"]
Feb 16 22:01:37 crc kubenswrapper[4805]: I0216 22:01:37.394559 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qn566" podUID="99ac2568-8835-4dc3-bb14-a951d36a0552" containerName="registry-server" containerID="cri-o://c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e" gracePeriod=2
Feb 16 22:01:37 crc kubenswrapper[4805]: I0216 22:01:37.996616 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.133985 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-catalog-content\") pod \"99ac2568-8835-4dc3-bb14-a951d36a0552\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") "
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.134298 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-utilities\") pod \"99ac2568-8835-4dc3-bb14-a951d36a0552\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") "
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.134496 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjm97\" (UniqueName: \"kubernetes.io/projected/99ac2568-8835-4dc3-bb14-a951d36a0552-kube-api-access-vjm97\") pod \"99ac2568-8835-4dc3-bb14-a951d36a0552\" (UID: \"99ac2568-8835-4dc3-bb14-a951d36a0552\") "
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.134955 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-utilities" (OuterVolumeSpecName: "utilities") pod "99ac2568-8835-4dc3-bb14-a951d36a0552" (UID: "99ac2568-8835-4dc3-bb14-a951d36a0552"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.136219 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.141321 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ac2568-8835-4dc3-bb14-a951d36a0552-kube-api-access-vjm97" (OuterVolumeSpecName: "kube-api-access-vjm97") pod "99ac2568-8835-4dc3-bb14-a951d36a0552" (UID: "99ac2568-8835-4dc3-bb14-a951d36a0552"). InnerVolumeSpecName "kube-api-access-vjm97". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.198414 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99ac2568-8835-4dc3-bb14-a951d36a0552" (UID: "99ac2568-8835-4dc3-bb14-a951d36a0552"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.238734 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjm97\" (UniqueName: \"kubernetes.io/projected/99ac2568-8835-4dc3-bb14-a951d36a0552-kube-api-access-vjm97\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.238776 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ac2568-8835-4dc3-bb14-a951d36a0552-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.409058 4805 generic.go:334] "Generic (PLEG): container finished" podID="99ac2568-8835-4dc3-bb14-a951d36a0552" containerID="c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e" exitCode=0
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.409096 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn566" event={"ID":"99ac2568-8835-4dc3-bb14-a951d36a0552","Type":"ContainerDied","Data":"c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e"}
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.409125 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn566" event={"ID":"99ac2568-8835-4dc3-bb14-a951d36a0552","Type":"ContainerDied","Data":"f910a2b3fb28b5d693953d0bb93964a34f352e93453bc12be977589cbb558aaf"}
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.409143 4805 scope.go:117] "RemoveContainer" containerID="c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e"
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.409157 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qn566"
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.441618 4805 scope.go:117] "RemoveContainer" containerID="f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589"
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.465298 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qn566"]
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.478850 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qn566"]
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.481415 4805 scope.go:117] "RemoveContainer" containerID="e2b54aed1b2f0a4b771adb2d2160913dca08ad104e1cea28589237e6663d7c7e"
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.545370 4805 scope.go:117] "RemoveContainer" containerID="c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e"
Feb 16 22:01:38 crc kubenswrapper[4805]: E0216 22:01:38.545827 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e\": container with ID starting with c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e not found: ID does not exist" containerID="c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e"
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.545857 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e"} err="failed to get container status \"c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e\": rpc error: code = NotFound desc = could not find container \"c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e\": container with ID starting with c5e1c66b6c3225b2933a14a4ac92a888eafeec0c4a163cf03ee7850c1cb5625e not found: ID does not exist"
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.545881 4805 scope.go:117] "RemoveContainer" containerID="f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589"
Feb 16 22:01:38 crc kubenswrapper[4805]: E0216 22:01:38.546344 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589\": container with ID starting with f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589 not found: ID does not exist" containerID="f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589"
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.546408 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589"} err="failed to get container status \"f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589\": rpc error: code = NotFound desc = could not find container \"f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589\": container with ID starting with f85c0b42523d74895ae639b2cd55c51264a49ebba54756ba147d7adb40031589 not found: ID does not exist"
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.546440 4805 scope.go:117] "RemoveContainer" containerID="e2b54aed1b2f0a4b771adb2d2160913dca08ad104e1cea28589237e6663d7c7e"
Feb 16 22:01:38 crc kubenswrapper[4805]: E0216 22:01:38.546740 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2b54aed1b2f0a4b771adb2d2160913dca08ad104e1cea28589237e6663d7c7e\": container with ID starting with e2b54aed1b2f0a4b771adb2d2160913dca08ad104e1cea28589237e6663d7c7e not found: ID does not exist" containerID="e2b54aed1b2f0a4b771adb2d2160913dca08ad104e1cea28589237e6663d7c7e"
Feb 16 22:01:38 crc kubenswrapper[4805]: I0216 22:01:38.546771 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2b54aed1b2f0a4b771adb2d2160913dca08ad104e1cea28589237e6663d7c7e"} err="failed to get container status \"e2b54aed1b2f0a4b771adb2d2160913dca08ad104e1cea28589237e6663d7c7e\": rpc error: code = NotFound desc = could not find container \"e2b54aed1b2f0a4b771adb2d2160913dca08ad104e1cea28589237e6663d7c7e\": container with ID starting with e2b54aed1b2f0a4b771adb2d2160913dca08ad104e1cea28589237e6663d7c7e not found: ID does not exist"
Feb 16 22:01:39 crc kubenswrapper[4805]: I0216 22:01:39.427252 4805 generic.go:334] "Generic (PLEG): container finished" podID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerID="de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba" exitCode=0
Feb 16 22:01:39 crc kubenswrapper[4805]: I0216 22:01:39.427338 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68blv" event={"ID":"4b51cc4a-8c49-465e-94df-030e188d1b29","Type":"ContainerDied","Data":"de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba"}
Feb 16 22:01:39 crc kubenswrapper[4805]: I0216 22:01:39.610916 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ac2568-8835-4dc3-bb14-a951d36a0552" path="/var/lib/kubelet/pods/99ac2568-8835-4dc3-bb14-a951d36a0552/volumes"
Feb 16 22:01:40 crc kubenswrapper[4805]: I0216 22:01:40.444106 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68blv" event={"ID":"4b51cc4a-8c49-465e-94df-030e188d1b29","Type":"ContainerStarted","Data":"e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff"}
Feb 16 22:01:41 crc kubenswrapper[4805]: I0216 22:01:41.805767 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="26f1c84d-9566-4135-a24a-ce299c76a102" containerName="galera" probeResult="failure" output="command timed out"
Feb 16 22:01:41 crc kubenswrapper[4805]: I0216 22:01:41.805798 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="26f1c84d-9566-4135-a24a-ce299c76a102" containerName="galera" probeResult="failure" output="command timed out"
Feb 16 22:01:42 crc kubenswrapper[4805]: I0216 22:01:42.546280 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:42 crc kubenswrapper[4805]: I0216 22:01:42.546344 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-68blv"
Feb 16 22:01:43 crc kubenswrapper[4805]: I0216 22:01:43.647654 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-68blv" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="registry-server" probeResult="failure" output=<
Feb 16 22:01:43 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s
Feb 16 22:01:43 crc kubenswrapper[4805]: >
Feb 16 22:01:49 crc kubenswrapper[4805]: E0216 22:01:49.601614 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:01:49 crc kubenswrapper[4805]: I0216 22:01:49.601762 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 22:01:49 crc kubenswrapper[4805]: E0216 22:01:49.735325 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 22:01:49 crc kubenswrapper[4805]: E0216 22:01:49.735670 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 22:01:49 crc kubenswrapper[4805]: E0216 22:01:49.735819 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{N
ame:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:01:49 crc kubenswrapper[4805]: E0216 22:01:49.736985 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:01:53 crc kubenswrapper[4805]: I0216 22:01:53.595677 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-68blv" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="registry-server" probeResult="failure" output=< Feb 16 22:01:53 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 22:01:53 crc kubenswrapper[4805]: > Feb 16 22:02:00 crc kubenswrapper[4805]: E0216 22:02:00.600721 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:02:00 crc kubenswrapper[4805]: I0216 22:02:00.620804 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-68blv" podStartSLOduration=22.125302939 podStartE2EDuration="28.620708054s" podCreationTimestamp="2026-02-16 22:01:32 +0000 UTC" firstStartedPulling="2026-02-16 22:01:33.347015697 +0000 UTC m=+3911.165698992" lastFinishedPulling="2026-02-16 22:01:39.842420812 +0000 UTC m=+3917.661104107" observedRunningTime="2026-02-16 22:01:40.489215361 +0000 UTC m=+3918.307898666" watchObservedRunningTime="2026-02-16 22:02:00.620708054 +0000 UTC m=+3938.439391359" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.054186 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk"] Feb 16 22:02:03 crc kubenswrapper[4805]: E0216 22:02:03.057907 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ac2568-8835-4dc3-bb14-a951d36a0552" containerName="extract-utilities" Feb 16 22:02:03 crc kubenswrapper[4805]: 
I0216 22:02:03.057942 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ac2568-8835-4dc3-bb14-a951d36a0552" containerName="extract-utilities" Feb 16 22:02:03 crc kubenswrapper[4805]: E0216 22:02:03.058021 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ac2568-8835-4dc3-bb14-a951d36a0552" containerName="registry-server" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.058031 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ac2568-8835-4dc3-bb14-a951d36a0552" containerName="registry-server" Feb 16 22:02:03 crc kubenswrapper[4805]: E0216 22:02:03.058062 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ac2568-8835-4dc3-bb14-a951d36a0552" containerName="extract-content" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.058074 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ac2568-8835-4dc3-bb14-a951d36a0552" containerName="extract-content" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.059294 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ac2568-8835-4dc3-bb14-a951d36a0552" containerName="registry-server" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.060854 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.064509 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.064630 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.064657 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46tr9" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.065153 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk"] Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.065161 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.189188 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.189250 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 
22:02:03.189392 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb4gc\" (UniqueName: \"kubernetes.io/projected/fe35a496-fcca-49d1-92f0-1356c05feb2b-kube-api-access-gb4gc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.291955 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.292439 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.292632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb4gc\" (UniqueName: \"kubernetes.io/projected/fe35a496-fcca-49d1-92f0-1356c05feb2b-kube-api-access-gb4gc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.298446 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.298923 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.311518 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb4gc\" (UniqueName: \"kubernetes.io/projected/fe35a496-fcca-49d1-92f0-1356c05feb2b-kube-api-access-gb4gc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.391854 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.606887 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-68blv" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="registry-server" probeResult="failure" output=< Feb 16 22:02:03 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 22:02:03 crc kubenswrapper[4805]: > Feb 16 22:02:03 crc kubenswrapper[4805]: I0216 22:02:03.960214 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk"] Feb 16 22:02:04 crc kubenswrapper[4805]: W0216 22:02:04.420566 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe35a496_fcca_49d1_92f0_1356c05feb2b.slice/crio-b43712212b160d85e8a703cd23dcfd4da64a0eb1b4fa6c3006f008172dee0f38 WatchSource:0}: Error finding container b43712212b160d85e8a703cd23dcfd4da64a0eb1b4fa6c3006f008172dee0f38: Status 404 returned error can't find the container with id b43712212b160d85e8a703cd23dcfd4da64a0eb1b4fa6c3006f008172dee0f38 Feb 16 22:02:04 crc kubenswrapper[4805]: E0216 22:02:04.599765 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:02:04 crc kubenswrapper[4805]: I0216 22:02:04.685384 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" event={"ID":"fe35a496-fcca-49d1-92f0-1356c05feb2b","Type":"ContainerStarted","Data":"b43712212b160d85e8a703cd23dcfd4da64a0eb1b4fa6c3006f008172dee0f38"} 
Feb 16 22:02:05 crc kubenswrapper[4805]: I0216 22:02:05.700791 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" event={"ID":"fe35a496-fcca-49d1-92f0-1356c05feb2b","Type":"ContainerStarted","Data":"48f4aa2e0712ec286a1ded8f15aee750bd184cf7c2d809e547d0d247429b724e"} Feb 16 22:02:05 crc kubenswrapper[4805]: I0216 22:02:05.728814 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" podStartSLOduration=2.245272037 podStartE2EDuration="2.728792552s" podCreationTimestamp="2026-02-16 22:02:03 +0000 UTC" firstStartedPulling="2026-02-16 22:02:04.423385605 +0000 UTC m=+3942.242068890" lastFinishedPulling="2026-02-16 22:02:04.90690612 +0000 UTC m=+3942.725589405" observedRunningTime="2026-02-16 22:02:05.715105874 +0000 UTC m=+3943.533789169" watchObservedRunningTime="2026-02-16 22:02:05.728792552 +0000 UTC m=+3943.547475847" Feb 16 22:02:13 crc kubenswrapper[4805]: I0216 22:02:13.594547 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-68blv" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="registry-server" probeResult="failure" output=< Feb 16 22:02:13 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 22:02:13 crc kubenswrapper[4805]: > Feb 16 22:02:13 crc kubenswrapper[4805]: E0216 22:02:13.611637 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:02:19 crc kubenswrapper[4805]: E0216 22:02:19.734922 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:02:19 crc kubenswrapper[4805]: E0216 22:02:19.735457 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:02:19 crc kubenswrapper[4805]: E0216 22:02:19.735580 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:02:19 crc kubenswrapper[4805]: E0216 22:02:19.736788 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:02:22 crc kubenswrapper[4805]: I0216 22:02:22.597678 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-68blv" Feb 16 22:02:22 crc kubenswrapper[4805]: I0216 22:02:22.646910 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-68blv" Feb 16 22:02:22 crc kubenswrapper[4805]: I0216 22:02:22.835253 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68blv"] Feb 16 22:02:23 crc kubenswrapper[4805]: I0216 22:02:23.946555 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-68blv" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="registry-server" containerID="cri-o://e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff" gracePeriod=2 Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.496066 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68blv" Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.657614 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-utilities\") pod \"4b51cc4a-8c49-465e-94df-030e188d1b29\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.657995 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66ssb\" (UniqueName: \"kubernetes.io/projected/4b51cc4a-8c49-465e-94df-030e188d1b29-kube-api-access-66ssb\") pod \"4b51cc4a-8c49-465e-94df-030e188d1b29\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.658188 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-catalog-content\") pod \"4b51cc4a-8c49-465e-94df-030e188d1b29\" (UID: \"4b51cc4a-8c49-465e-94df-030e188d1b29\") " Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.663810 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-utilities" (OuterVolumeSpecName: "utilities") pod "4b51cc4a-8c49-465e-94df-030e188d1b29" (UID: "4b51cc4a-8c49-465e-94df-030e188d1b29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.666172 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b51cc4a-8c49-465e-94df-030e188d1b29-kube-api-access-66ssb" (OuterVolumeSpecName: "kube-api-access-66ssb") pod "4b51cc4a-8c49-465e-94df-030e188d1b29" (UID: "4b51cc4a-8c49-465e-94df-030e188d1b29"). InnerVolumeSpecName "kube-api-access-66ssb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.761625 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.761676 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66ssb\" (UniqueName: \"kubernetes.io/projected/4b51cc4a-8c49-465e-94df-030e188d1b29-kube-api-access-66ssb\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.782066 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b51cc4a-8c49-465e-94df-030e188d1b29" (UID: "4b51cc4a-8c49-465e-94df-030e188d1b29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.864155 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51cc4a-8c49-465e-94df-030e188d1b29-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.959418 4805 generic.go:334] "Generic (PLEG): container finished" podID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerID="e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff" exitCode=0 Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.959488 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68blv" event={"ID":"4b51cc4a-8c49-465e-94df-030e188d1b29","Type":"ContainerDied","Data":"e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff"} Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.959509 4805 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68blv" Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.959528 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68blv" event={"ID":"4b51cc4a-8c49-465e-94df-030e188d1b29","Type":"ContainerDied","Data":"cc13f895a52c06fbd88af14cd544827a0423733fd939b4701725d6873eeecec7"} Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.959554 4805 scope.go:117] "RemoveContainer" containerID="e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff" Feb 16 22:02:24 crc kubenswrapper[4805]: I0216 22:02:24.993123 4805 scope.go:117] "RemoveContainer" containerID="de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba" Feb 16 22:02:25 crc kubenswrapper[4805]: I0216 22:02:25.010961 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68blv"] Feb 16 22:02:25 crc kubenswrapper[4805]: I0216 22:02:25.033306 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-68blv"] Feb 16 22:02:25 crc kubenswrapper[4805]: I0216 22:02:25.041509 4805 scope.go:117] "RemoveContainer" containerID="4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1" Feb 16 22:02:25 crc kubenswrapper[4805]: I0216 22:02:25.095854 4805 scope.go:117] "RemoveContainer" containerID="e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff" Feb 16 22:02:25 crc kubenswrapper[4805]: E0216 22:02:25.096338 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff\": container with ID starting with e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff not found: ID does not exist" containerID="e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff" Feb 16 22:02:25 crc kubenswrapper[4805]: I0216 22:02:25.096378 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff"} err="failed to get container status \"e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff\": rpc error: code = NotFound desc = could not find container \"e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff\": container with ID starting with e76abdf342e295a4323af2ac39039f3c4f8dd12f7f05189bde0c13078b7fc8ff not found: ID does not exist" Feb 16 22:02:25 crc kubenswrapper[4805]: I0216 22:02:25.096402 4805 scope.go:117] "RemoveContainer" containerID="de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba" Feb 16 22:02:25 crc kubenswrapper[4805]: E0216 22:02:25.100186 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba\": container with ID starting with de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba not found: ID does not exist" containerID="de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba" Feb 16 22:02:25 crc kubenswrapper[4805]: I0216 22:02:25.100258 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba"} err="failed to get container status \"de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba\": rpc error: code = NotFound desc = could not find container \"de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba\": container with ID starting with de333695ab873d1c7af99a305017b136c5803b0a21941f4448c1234789b0bdba not found: ID does not exist" Feb 16 22:02:25 crc kubenswrapper[4805]: I0216 22:02:25.100295 4805 scope.go:117] "RemoveContainer" containerID="4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1" Feb 16 22:02:25 crc kubenswrapper[4805]: E0216 
22:02:25.100605 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1\": container with ID starting with 4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1 not found: ID does not exist" containerID="4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1" Feb 16 22:02:25 crc kubenswrapper[4805]: I0216 22:02:25.100641 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1"} err="failed to get container status \"4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1\": rpc error: code = NotFound desc = could not find container \"4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1\": container with ID starting with 4ec8473c3b5e2bc13cf4cc8b8bfcaaa77828f3bee26731c1a5df7fe616a698c1 not found: ID does not exist" Feb 16 22:02:25 crc kubenswrapper[4805]: I0216 22:02:25.641449 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" path="/var/lib/kubelet/pods/4b51cc4a-8c49-465e-94df-030e188d1b29/volumes" Feb 16 22:02:27 crc kubenswrapper[4805]: E0216 22:02:27.601913 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:02:33 crc kubenswrapper[4805]: E0216 22:02:33.619980 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:02:38 crc kubenswrapper[4805]: E0216 22:02:38.599479 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:02:44 crc kubenswrapper[4805]: E0216 22:02:44.599746 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:02:53 crc kubenswrapper[4805]: E0216 22:02:53.607897 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:02:55 crc kubenswrapper[4805]: E0216 22:02:55.602097 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:03:08 crc kubenswrapper[4805]: I0216 22:03:08.099955 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 16 22:03:08 crc kubenswrapper[4805]: I0216 22:03:08.103249 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:03:08 crc kubenswrapper[4805]: E0216 22:03:08.601009 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:03:10 crc kubenswrapper[4805]: E0216 22:03:10.602438 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:03:19 crc kubenswrapper[4805]: E0216 22:03:19.600300 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:03:22 crc kubenswrapper[4805]: E0216 22:03:22.601298 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:03:33 crc kubenswrapper[4805]: E0216 22:03:33.608141 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:03:37 crc kubenswrapper[4805]: E0216 22:03:37.599793 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:03:38 crc kubenswrapper[4805]: I0216 22:03:38.100072 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:03:38 crc kubenswrapper[4805]: I0216 22:03:38.100190 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:03:44 crc kubenswrapper[4805]: E0216 22:03:44.600898 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" 
podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:03:52 crc kubenswrapper[4805]: E0216 22:03:52.599254 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:03:57 crc kubenswrapper[4805]: E0216 22:03:57.600401 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:04:03 crc kubenswrapper[4805]: E0216 22:04:03.608064 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:04:08 crc kubenswrapper[4805]: I0216 22:04:08.099551 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:04:08 crc kubenswrapper[4805]: I0216 22:04:08.100285 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 16 22:04:08 crc kubenswrapper[4805]: I0216 22:04:08.100349 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 22:04:08 crc kubenswrapper[4805]: I0216 22:04:08.101313 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ada81ab054aab5f35ec26766a68b4422744f5fd713c0f0116d56fc3c7eb55664"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:04:08 crc kubenswrapper[4805]: I0216 22:04:08.101420 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://ada81ab054aab5f35ec26766a68b4422744f5fd713c0f0116d56fc3c7eb55664" gracePeriod=600 Feb 16 22:04:08 crc kubenswrapper[4805]: E0216 22:04:08.599262 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:04:09 crc kubenswrapper[4805]: I0216 22:04:09.158884 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="ada81ab054aab5f35ec26766a68b4422744f5fd713c0f0116d56fc3c7eb55664" exitCode=0 Feb 16 22:04:09 crc kubenswrapper[4805]: I0216 22:04:09.158948 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" 
event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"ada81ab054aab5f35ec26766a68b4422744f5fd713c0f0116d56fc3c7eb55664"} Feb 16 22:04:09 crc kubenswrapper[4805]: I0216 22:04:09.159308 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd"} Feb 16 22:04:09 crc kubenswrapper[4805]: I0216 22:04:09.159334 4805 scope.go:117] "RemoveContainer" containerID="b7345989d0f9823013e770c8155c1623297bf12a85e52834a5a8123b643d6bb0" Feb 16 22:04:17 crc kubenswrapper[4805]: I0216 22:04:17.000300 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7688d557bc-2jgzd" podUID="95ea5d76-aedb-4a0a-a03d-fdc9140265e4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 16 22:04:18 crc kubenswrapper[4805]: E0216 22:04:18.606677 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:04:19 crc kubenswrapper[4805]: E0216 22:04:19.600431 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:04:31 crc kubenswrapper[4805]: E0216 22:04:31.603118 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:04:31 crc kubenswrapper[4805]: E0216 22:04:31.603350 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:04:45 crc kubenswrapper[4805]: E0216 22:04:45.600606 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:04:46 crc kubenswrapper[4805]: E0216 22:04:46.600393 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:04:57 crc kubenswrapper[4805]: E0216 22:04:57.600559 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.272145 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nqsn5"] Feb 16 22:04:58 crc kubenswrapper[4805]: 
E0216 22:04:58.272791 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="extract-utilities" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.272816 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="extract-utilities" Feb 16 22:04:58 crc kubenswrapper[4805]: E0216 22:04:58.272834 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="extract-content" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.272842 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="extract-content" Feb 16 22:04:58 crc kubenswrapper[4805]: E0216 22:04:58.272865 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="registry-server" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.272873 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="registry-server" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.273156 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b51cc4a-8c49-465e-94df-030e188d1b29" containerName="registry-server" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.276189 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.289259 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqsn5"] Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.359369 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxr2w\" (UniqueName: \"kubernetes.io/projected/6cc18996-ab19-43a8-bfa3-170b0818ff35-kube-api-access-qxr2w\") pod \"redhat-marketplace-nqsn5\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.359638 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-catalog-content\") pod \"redhat-marketplace-nqsn5\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.359690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-utilities\") pod \"redhat-marketplace-nqsn5\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.461626 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-catalog-content\") pod \"redhat-marketplace-nqsn5\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.461686 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-utilities\") pod \"redhat-marketplace-nqsn5\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.461744 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxr2w\" (UniqueName: \"kubernetes.io/projected/6cc18996-ab19-43a8-bfa3-170b0818ff35-kube-api-access-qxr2w\") pod \"redhat-marketplace-nqsn5\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.462119 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-catalog-content\") pod \"redhat-marketplace-nqsn5\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.462222 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-utilities\") pod \"redhat-marketplace-nqsn5\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.485480 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxr2w\" (UniqueName: \"kubernetes.io/projected/6cc18996-ab19-43a8-bfa3-170b0818ff35-kube-api-access-qxr2w\") pod \"redhat-marketplace-nqsn5\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:04:58 crc kubenswrapper[4805]: I0216 22:04:58.600124 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:04:58 crc kubenswrapper[4805]: E0216 22:04:58.601104 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:04:59 crc kubenswrapper[4805]: I0216 22:04:59.096911 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqsn5"] Feb 16 22:04:59 crc kubenswrapper[4805]: W0216 22:04:59.098553 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cc18996_ab19_43a8_bfa3_170b0818ff35.slice/crio-d60c1919b8fb78686a5056638aa4c2dd3387cb600f18bff3eeaf9ce36dca8a29 WatchSource:0}: Error finding container d60c1919b8fb78686a5056638aa4c2dd3387cb600f18bff3eeaf9ce36dca8a29: Status 404 returned error can't find the container with id d60c1919b8fb78686a5056638aa4c2dd3387cb600f18bff3eeaf9ce36dca8a29 Feb 16 22:04:59 crc kubenswrapper[4805]: I0216 22:04:59.732682 4805 generic.go:334] "Generic (PLEG): container finished" podID="6cc18996-ab19-43a8-bfa3-170b0818ff35" containerID="201dfe80dd1a92765a6befc5b2c3e8bc4e8dbd0d66a1c077447e1b70aef3b90d" exitCode=0 Feb 16 22:04:59 crc kubenswrapper[4805]: I0216 22:04:59.732754 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqsn5" event={"ID":"6cc18996-ab19-43a8-bfa3-170b0818ff35","Type":"ContainerDied","Data":"201dfe80dd1a92765a6befc5b2c3e8bc4e8dbd0d66a1c077447e1b70aef3b90d"} Feb 16 22:04:59 crc kubenswrapper[4805]: I0216 22:04:59.732798 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqsn5" 
event={"ID":"6cc18996-ab19-43a8-bfa3-170b0818ff35","Type":"ContainerStarted","Data":"d60c1919b8fb78686a5056638aa4c2dd3387cb600f18bff3eeaf9ce36dca8a29"} Feb 16 22:05:00 crc kubenswrapper[4805]: I0216 22:05:00.745331 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqsn5" event={"ID":"6cc18996-ab19-43a8-bfa3-170b0818ff35","Type":"ContainerStarted","Data":"7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04"} Feb 16 22:05:01 crc kubenswrapper[4805]: I0216 22:05:01.756497 4805 generic.go:334] "Generic (PLEG): container finished" podID="6cc18996-ab19-43a8-bfa3-170b0818ff35" containerID="7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04" exitCode=0 Feb 16 22:05:01 crc kubenswrapper[4805]: I0216 22:05:01.756598 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqsn5" event={"ID":"6cc18996-ab19-43a8-bfa3-170b0818ff35","Type":"ContainerDied","Data":"7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04"} Feb 16 22:05:02 crc kubenswrapper[4805]: I0216 22:05:02.771575 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqsn5" event={"ID":"6cc18996-ab19-43a8-bfa3-170b0818ff35","Type":"ContainerStarted","Data":"5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944"} Feb 16 22:05:02 crc kubenswrapper[4805]: I0216 22:05:02.813991 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nqsn5" podStartSLOduration=2.3962443110000002 podStartE2EDuration="4.813964989s" podCreationTimestamp="2026-02-16 22:04:58 +0000 UTC" firstStartedPulling="2026-02-16 22:04:59.735324132 +0000 UTC m=+4117.554007457" lastFinishedPulling="2026-02-16 22:05:02.15304483 +0000 UTC m=+4119.971728135" observedRunningTime="2026-02-16 22:05:02.79463743 +0000 UTC m=+4120.613320725" watchObservedRunningTime="2026-02-16 22:05:02.813964989 +0000 UTC 
m=+4120.632648304" Feb 16 22:05:08 crc kubenswrapper[4805]: E0216 22:05:08.600480 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:05:08 crc kubenswrapper[4805]: I0216 22:05:08.600510 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:05:08 crc kubenswrapper[4805]: I0216 22:05:08.601280 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:05:08 crc kubenswrapper[4805]: I0216 22:05:08.649069 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:05:08 crc kubenswrapper[4805]: I0216 22:05:08.884906 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:05:08 crc kubenswrapper[4805]: I0216 22:05:08.948396 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqsn5"] Feb 16 22:05:10 crc kubenswrapper[4805]: I0216 22:05:10.860140 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nqsn5" podUID="6cc18996-ab19-43a8-bfa3-170b0818ff35" containerName="registry-server" containerID="cri-o://5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944" gracePeriod=2 Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.564364 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.719181 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-utilities\") pod \"6cc18996-ab19-43a8-bfa3-170b0818ff35\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.719289 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxr2w\" (UniqueName: \"kubernetes.io/projected/6cc18996-ab19-43a8-bfa3-170b0818ff35-kube-api-access-qxr2w\") pod \"6cc18996-ab19-43a8-bfa3-170b0818ff35\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.719387 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-catalog-content\") pod \"6cc18996-ab19-43a8-bfa3-170b0818ff35\" (UID: \"6cc18996-ab19-43a8-bfa3-170b0818ff35\") " Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.720284 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-utilities" (OuterVolumeSpecName: "utilities") pod "6cc18996-ab19-43a8-bfa3-170b0818ff35" (UID: "6cc18996-ab19-43a8-bfa3-170b0818ff35"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.721401 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.728407 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc18996-ab19-43a8-bfa3-170b0818ff35-kube-api-access-qxr2w" (OuterVolumeSpecName: "kube-api-access-qxr2w") pod "6cc18996-ab19-43a8-bfa3-170b0818ff35" (UID: "6cc18996-ab19-43a8-bfa3-170b0818ff35"). InnerVolumeSpecName "kube-api-access-qxr2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.755253 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cc18996-ab19-43a8-bfa3-170b0818ff35" (UID: "6cc18996-ab19-43a8-bfa3-170b0818ff35"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.823591 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxr2w\" (UniqueName: \"kubernetes.io/projected/6cc18996-ab19-43a8-bfa3-170b0818ff35-kube-api-access-qxr2w\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.823635 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc18996-ab19-43a8-bfa3-170b0818ff35-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.872215 4805 generic.go:334] "Generic (PLEG): container finished" podID="6cc18996-ab19-43a8-bfa3-170b0818ff35" containerID="5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944" exitCode=0 Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.872265 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqsn5" event={"ID":"6cc18996-ab19-43a8-bfa3-170b0818ff35","Type":"ContainerDied","Data":"5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944"} Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.872278 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqsn5" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.872318 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqsn5" event={"ID":"6cc18996-ab19-43a8-bfa3-170b0818ff35","Type":"ContainerDied","Data":"d60c1919b8fb78686a5056638aa4c2dd3387cb600f18bff3eeaf9ce36dca8a29"} Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.872338 4805 scope.go:117] "RemoveContainer" containerID="5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.906463 4805 scope.go:117] "RemoveContainer" containerID="7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.909389 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqsn5"] Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.918842 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqsn5"] Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.931069 4805 scope.go:117] "RemoveContainer" containerID="201dfe80dd1a92765a6befc5b2c3e8bc4e8dbd0d66a1c077447e1b70aef3b90d" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.995301 4805 scope.go:117] "RemoveContainer" containerID="5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944" Feb 16 22:05:11 crc kubenswrapper[4805]: E0216 22:05:11.996270 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944\": container with ID starting with 5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944 not found: ID does not exist" containerID="5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944" Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.996352 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944"} err="failed to get container status \"5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944\": rpc error: code = NotFound desc = could not find container \"5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944\": container with ID starting with 5c001144de001f6a267c082639a05b1d78348f572069ace7dfe02073466c6944 not found: ID does not exist"
Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.996396 4805 scope.go:117] "RemoveContainer" containerID="7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04"
Feb 16 22:05:11 crc kubenswrapper[4805]: E0216 22:05:11.996947 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04\": container with ID starting with 7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04 not found: ID does not exist" containerID="7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04"
Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.996997 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04"} err="failed to get container status \"7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04\": rpc error: code = NotFound desc = could not find container \"7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04\": container with ID starting with 7d399e1f0069709cc0a31fea0272e8ffff683d3096dae7e5acabf0a74ae1ed04 not found: ID does not exist"
Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.997024 4805 scope.go:117] "RemoveContainer" containerID="201dfe80dd1a92765a6befc5b2c3e8bc4e8dbd0d66a1c077447e1b70aef3b90d"
Feb 16 22:05:11 crc kubenswrapper[4805]: E0216 22:05:11.997513 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"201dfe80dd1a92765a6befc5b2c3e8bc4e8dbd0d66a1c077447e1b70aef3b90d\": container with ID starting with 201dfe80dd1a92765a6befc5b2c3e8bc4e8dbd0d66a1c077447e1b70aef3b90d not found: ID does not exist" containerID="201dfe80dd1a92765a6befc5b2c3e8bc4e8dbd0d66a1c077447e1b70aef3b90d"
Feb 16 22:05:11 crc kubenswrapper[4805]: I0216 22:05:11.997551 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"201dfe80dd1a92765a6befc5b2c3e8bc4e8dbd0d66a1c077447e1b70aef3b90d"} err="failed to get container status \"201dfe80dd1a92765a6befc5b2c3e8bc4e8dbd0d66a1c077447e1b70aef3b90d\": rpc error: code = NotFound desc = could not find container \"201dfe80dd1a92765a6befc5b2c3e8bc4e8dbd0d66a1c077447e1b70aef3b90d\": container with ID starting with 201dfe80dd1a92765a6befc5b2c3e8bc4e8dbd0d66a1c077447e1b70aef3b90d not found: ID does not exist"
Feb 16 22:05:12 crc kubenswrapper[4805]: E0216 22:05:12.604208 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:05:13 crc kubenswrapper[4805]: I0216 22:05:13.623143 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc18996-ab19-43a8-bfa3-170b0818ff35" path="/var/lib/kubelet/pods/6cc18996-ab19-43a8-bfa3-170b0818ff35/volumes"
Feb 16 22:05:20 crc kubenswrapper[4805]: E0216 22:05:20.602181 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:05:24 crc kubenswrapper[4805]: E0216 22:05:24.599523 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:05:34 crc kubenswrapper[4805]: E0216 22:05:34.602144 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:05:36 crc kubenswrapper[4805]: E0216 22:05:36.600176 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:05:47 crc kubenswrapper[4805]: E0216 22:05:47.599712 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:05:48 crc kubenswrapper[4805]: E0216 22:05:48.599977 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:05:58 crc kubenswrapper[4805]: E0216 22:05:58.599675 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.540670 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-prwcp"]
Feb 16 22:06:01 crc kubenswrapper[4805]: E0216 22:06:01.541871 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc18996-ab19-43a8-bfa3-170b0818ff35" containerName="extract-utilities"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.541892 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc18996-ab19-43a8-bfa3-170b0818ff35" containerName="extract-utilities"
Feb 16 22:06:01 crc kubenswrapper[4805]: E0216 22:06:01.541909 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc18996-ab19-43a8-bfa3-170b0818ff35" containerName="registry-server"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.541917 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc18996-ab19-43a8-bfa3-170b0818ff35" containerName="registry-server"
Feb 16 22:06:01 crc kubenswrapper[4805]: E0216 22:06:01.541982 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc18996-ab19-43a8-bfa3-170b0818ff35" containerName="extract-content"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.541991 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc18996-ab19-43a8-bfa3-170b0818ff35" containerName="extract-content"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.542275 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc18996-ab19-43a8-bfa3-170b0818ff35" containerName="registry-server"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.544502 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.562317 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-prwcp"]
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.677119 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-utilities\") pod \"community-operators-prwcp\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") " pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.677215 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-catalog-content\") pod \"community-operators-prwcp\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") " pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.677247 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjdlt\" (UniqueName: \"kubernetes.io/projected/42b3db38-5e15-4583-ae53-10aefe2afea6-kube-api-access-rjdlt\") pod \"community-operators-prwcp\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") " pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.780108 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-utilities\") pod \"community-operators-prwcp\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") " pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.780195 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-catalog-content\") pod \"community-operators-prwcp\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") " pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.780226 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjdlt\" (UniqueName: \"kubernetes.io/projected/42b3db38-5e15-4583-ae53-10aefe2afea6-kube-api-access-rjdlt\") pod \"community-operators-prwcp\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") " pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.780685 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-utilities\") pod \"community-operators-prwcp\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") " pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.780797 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-catalog-content\") pod \"community-operators-prwcp\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") " pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.805412 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjdlt\" (UniqueName: \"kubernetes.io/projected/42b3db38-5e15-4583-ae53-10aefe2afea6-kube-api-access-rjdlt\") pod \"community-operators-prwcp\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") " pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:01 crc kubenswrapper[4805]: I0216 22:06:01.876636 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:02 crc kubenswrapper[4805]: I0216 22:06:02.537403 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-prwcp"]
Feb 16 22:06:03 crc kubenswrapper[4805]: I0216 22:06:03.439907 4805 generic.go:334] "Generic (PLEG): container finished" podID="42b3db38-5e15-4583-ae53-10aefe2afea6" containerID="4114d838c764885a0d9529fc7f5605c69a3adcf396c40f551d08f297ad1b5950" exitCode=0
Feb 16 22:06:03 crc kubenswrapper[4805]: I0216 22:06:03.439993 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prwcp" event={"ID":"42b3db38-5e15-4583-ae53-10aefe2afea6","Type":"ContainerDied","Data":"4114d838c764885a0d9529fc7f5605c69a3adcf396c40f551d08f297ad1b5950"}
Feb 16 22:06:03 crc kubenswrapper[4805]: I0216 22:06:03.440466 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prwcp" event={"ID":"42b3db38-5e15-4583-ae53-10aefe2afea6","Type":"ContainerStarted","Data":"1dbcaea203d93036d4562301615155d90551cc6ce47e9338d4c7573890f693ff"}
Feb 16 22:06:03 crc kubenswrapper[4805]: E0216 22:06:03.621376 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:06:04 crc kubenswrapper[4805]: I0216 22:06:04.450594 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prwcp" event={"ID":"42b3db38-5e15-4583-ae53-10aefe2afea6","Type":"ContainerStarted","Data":"5b5dbce27dda2338802f050f52f3f7031fcba59c6824d40d256a8f499749c9e8"}
Feb 16 22:06:06 crc kubenswrapper[4805]: I0216 22:06:06.474199 4805 generic.go:334] "Generic (PLEG): container finished" podID="42b3db38-5e15-4583-ae53-10aefe2afea6" containerID="5b5dbce27dda2338802f050f52f3f7031fcba59c6824d40d256a8f499749c9e8" exitCode=0
Feb 16 22:06:06 crc kubenswrapper[4805]: I0216 22:06:06.474294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prwcp" event={"ID":"42b3db38-5e15-4583-ae53-10aefe2afea6","Type":"ContainerDied","Data":"5b5dbce27dda2338802f050f52f3f7031fcba59c6824d40d256a8f499749c9e8"}
Feb 16 22:06:07 crc kubenswrapper[4805]: I0216 22:06:07.489457 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prwcp" event={"ID":"42b3db38-5e15-4583-ae53-10aefe2afea6","Type":"ContainerStarted","Data":"c71026f0410f5cb37b31cde957f1643bb2fab824e64ba0d03e2308b7678f529e"}
Feb 16 22:06:07 crc kubenswrapper[4805]: I0216 22:06:07.511135 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-prwcp" podStartSLOduration=3.096470409 podStartE2EDuration="6.511116518s" podCreationTimestamp="2026-02-16 22:06:01 +0000 UTC" firstStartedPulling="2026-02-16 22:06:03.442207178 +0000 UTC m=+4181.260890483" lastFinishedPulling="2026-02-16 22:06:06.856853297 +0000 UTC m=+4184.675536592" observedRunningTime="2026-02-16 22:06:07.504907491 +0000 UTC m=+4185.323590796" watchObservedRunningTime="2026-02-16 22:06:07.511116518 +0000 UTC m=+4185.329799813"
Feb 16 22:06:08 crc kubenswrapper[4805]: I0216 22:06:08.099115 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:06:08 crc kubenswrapper[4805]: I0216 22:06:08.099462 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:06:09 crc kubenswrapper[4805]: E0216 22:06:09.602010 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:06:11 crc kubenswrapper[4805]: I0216 22:06:11.877347 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:11 crc kubenswrapper[4805]: I0216 22:06:11.878886 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:12 crc kubenswrapper[4805]: I0216 22:06:12.038497 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:12 crc kubenswrapper[4805]: I0216 22:06:12.612578 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:12 crc kubenswrapper[4805]: I0216 22:06:12.667971 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-prwcp"]
Feb 16 22:06:14 crc kubenswrapper[4805]: I0216 22:06:14.586429 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-prwcp" podUID="42b3db38-5e15-4583-ae53-10aefe2afea6" containerName="registry-server" containerID="cri-o://c71026f0410f5cb37b31cde957f1643bb2fab824e64ba0d03e2308b7678f529e" gracePeriod=2
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.605246 4805 generic.go:334] "Generic (PLEG): container finished" podID="42b3db38-5e15-4583-ae53-10aefe2afea6" containerID="c71026f0410f5cb37b31cde957f1643bb2fab824e64ba0d03e2308b7678f529e" exitCode=0
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.612009 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prwcp" event={"ID":"42b3db38-5e15-4583-ae53-10aefe2afea6","Type":"ContainerDied","Data":"c71026f0410f5cb37b31cde957f1643bb2fab824e64ba0d03e2308b7678f529e"}
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.612052 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prwcp" event={"ID":"42b3db38-5e15-4583-ae53-10aefe2afea6","Type":"ContainerDied","Data":"1dbcaea203d93036d4562301615155d90551cc6ce47e9338d4c7573890f693ff"}
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.612063 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dbcaea203d93036d4562301615155d90551cc6ce47e9338d4c7573890f693ff"
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.617965 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.653266 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjdlt\" (UniqueName: \"kubernetes.io/projected/42b3db38-5e15-4583-ae53-10aefe2afea6-kube-api-access-rjdlt\") pod \"42b3db38-5e15-4583-ae53-10aefe2afea6\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") "
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.653389 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-utilities\") pod \"42b3db38-5e15-4583-ae53-10aefe2afea6\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") "
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.653444 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-catalog-content\") pod \"42b3db38-5e15-4583-ae53-10aefe2afea6\" (UID: \"42b3db38-5e15-4583-ae53-10aefe2afea6\") "
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.660803 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-utilities" (OuterVolumeSpecName: "utilities") pod "42b3db38-5e15-4583-ae53-10aefe2afea6" (UID: "42b3db38-5e15-4583-ae53-10aefe2afea6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.672800 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b3db38-5e15-4583-ae53-10aefe2afea6-kube-api-access-rjdlt" (OuterVolumeSpecName: "kube-api-access-rjdlt") pod "42b3db38-5e15-4583-ae53-10aefe2afea6" (UID: "42b3db38-5e15-4583-ae53-10aefe2afea6"). InnerVolumeSpecName "kube-api-access-rjdlt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.723841 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42b3db38-5e15-4583-ae53-10aefe2afea6" (UID: "42b3db38-5e15-4583-ae53-10aefe2afea6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.754848 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.754883 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b3db38-5e15-4583-ae53-10aefe2afea6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 22:06:15 crc kubenswrapper[4805]: I0216 22:06:15.754899 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjdlt\" (UniqueName: \"kubernetes.io/projected/42b3db38-5e15-4583-ae53-10aefe2afea6-kube-api-access-rjdlt\") on node \"crc\" DevicePath \"\""
Feb 16 22:06:16 crc kubenswrapper[4805]: I0216 22:06:16.613299 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prwcp"
Feb 16 22:06:16 crc kubenswrapper[4805]: I0216 22:06:16.656125 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-prwcp"]
Feb 16 22:06:16 crc kubenswrapper[4805]: I0216 22:06:16.673872 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-prwcp"]
Feb 16 22:06:17 crc kubenswrapper[4805]: E0216 22:06:17.600014 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:06:17 crc kubenswrapper[4805]: I0216 22:06:17.612617 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b3db38-5e15-4583-ae53-10aefe2afea6" path="/var/lib/kubelet/pods/42b3db38-5e15-4583-ae53-10aefe2afea6/volumes"
Feb 16 22:06:20 crc kubenswrapper[4805]: E0216 22:06:20.600148 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:06:31 crc kubenswrapper[4805]: E0216 22:06:31.600493 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:06:33 crc kubenswrapper[4805]: E0216 22:06:33.608157 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:06:38 crc kubenswrapper[4805]: I0216 22:06:38.100165 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:06:38 crc kubenswrapper[4805]: I0216 22:06:38.100801 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:06:44 crc kubenswrapper[4805]: E0216 22:06:44.600839 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:06:45 crc kubenswrapper[4805]: E0216 22:06:45.599542 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:06:55 crc kubenswrapper[4805]: I0216 22:06:55.604469 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 22:06:55 crc kubenswrapper[4805]: E0216 22:06:55.729017 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 22:06:55 crc kubenswrapper[4805]: E0216 22:06:55.729410 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 22:06:55 crc kubenswrapper[4805]: E0216 22:06:55.729584 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 22:06:55 crc kubenswrapper[4805]: E0216 22:06:55.730972 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:06:56 crc kubenswrapper[4805]: E0216 22:06:56.599457 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:07:08 crc kubenswrapper[4805]: I0216 22:07:08.100113 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:07:08 crc kubenswrapper[4805]: I0216 22:07:08.100772 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:07:08 crc kubenswrapper[4805]: I0216 22:07:08.100830 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd"
Feb 16 22:07:08 crc kubenswrapper[4805]: I0216 22:07:08.102100 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 22:07:08 crc kubenswrapper[4805]: I0216 22:07:08.102198 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" gracePeriod=600
Feb 16 22:07:08 crc kubenswrapper[4805]: E0216 22:07:08.227322 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 22:07:09 crc kubenswrapper[4805]: I0216 22:07:09.230919 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" exitCode=0
Feb 16 22:07:09 crc kubenswrapper[4805]: I0216 22:07:09.230981 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd"}
Feb 16 22:07:09 crc kubenswrapper[4805]: I0216 22:07:09.231311 4805 scope.go:117] "RemoveContainer" containerID="ada81ab054aab5f35ec26766a68b4422744f5fd713c0f0116d56fc3c7eb55664"
Feb 16 22:07:09 crc kubenswrapper[4805]: I0216 22:07:09.232447 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd"
Feb 16 22:07:09 crc kubenswrapper[4805]: E0216 22:07:09.232933 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 22:07:10 crc kubenswrapper[4805]: E0216 22:07:10.601128 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc"
Feb 16 22:07:11 crc kubenswrapper[4805]: E0216 22:07:11.602565 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"
Feb 16 22:07:20 crc kubenswrapper[4805]: I0216 22:07:20.598347 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd"
Feb 16 22:07:20 crc kubenswrapper[4805]: E0216 22:07:20.599509 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6"
Feb 16 22:07:21 crc kubenswrapper[4805]: E0216 22:07:21.737685 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 22:07:21 crc kubenswrapper[4805]: E0216 22:07:21.738052 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 22:07:21 crc kubenswrapper[4805]: E0216 22:07:21.738233 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 22:07:21 crc kubenswrapper[4805]: E0216 22:07:21.739526 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:07:23 crc kubenswrapper[4805]: E0216 22:07:23.615903 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:07:33 crc kubenswrapper[4805]: E0216 22:07:33.621587 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:07:34 crc kubenswrapper[4805]: I0216 22:07:34.599475 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:07:34 crc kubenswrapper[4805]: E0216 22:07:34.601372 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:07:36 crc kubenswrapper[4805]: E0216 22:07:36.599800 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:07:46 
crc kubenswrapper[4805]: E0216 22:07:46.601035 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:07:48 crc kubenswrapper[4805]: I0216 22:07:48.597959 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:07:48 crc kubenswrapper[4805]: E0216 22:07:48.598833 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:07:50 crc kubenswrapper[4805]: E0216 22:07:50.600968 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:07:59 crc kubenswrapper[4805]: I0216 22:07:59.599202 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:07:59 crc kubenswrapper[4805]: E0216 22:07:59.599999 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:08:01 crc kubenswrapper[4805]: E0216 22:08:01.607415 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:08:03 crc kubenswrapper[4805]: E0216 22:08:03.608501 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:08:14 crc kubenswrapper[4805]: I0216 22:08:14.599763 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:08:14 crc kubenswrapper[4805]: E0216 22:08:14.603014 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:08:14 crc kubenswrapper[4805]: E0216 22:08:14.603320 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:08:16 crc kubenswrapper[4805]: E0216 22:08:16.599892 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:08:26 crc kubenswrapper[4805]: I0216 22:08:26.598364 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:08:26 crc kubenswrapper[4805]: E0216 22:08:26.599132 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:08:28 crc kubenswrapper[4805]: E0216 22:08:28.600425 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:08:29 crc kubenswrapper[4805]: I0216 22:08:29.111838 4805 generic.go:334] "Generic (PLEG): container finished" podID="fe35a496-fcca-49d1-92f0-1356c05feb2b" containerID="48f4aa2e0712ec286a1ded8f15aee750bd184cf7c2d809e547d0d247429b724e" exitCode=2 Feb 16 22:08:29 crc kubenswrapper[4805]: I0216 22:08:29.111889 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" event={"ID":"fe35a496-fcca-49d1-92f0-1356c05feb2b","Type":"ContainerDied","Data":"48f4aa2e0712ec286a1ded8f15aee750bd184cf7c2d809e547d0d247429b724e"} Feb 16 22:08:30 crc kubenswrapper[4805]: I0216 22:08:30.648180 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:08:30 crc kubenswrapper[4805]: I0216 22:08:30.796153 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-inventory\") pod \"fe35a496-fcca-49d1-92f0-1356c05feb2b\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " Feb 16 22:08:30 crc kubenswrapper[4805]: I0216 22:08:30.796270 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-ssh-key-openstack-edpm-ipam\") pod \"fe35a496-fcca-49d1-92f0-1356c05feb2b\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " Feb 16 22:08:30 crc kubenswrapper[4805]: I0216 22:08:30.796449 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb4gc\" (UniqueName: \"kubernetes.io/projected/fe35a496-fcca-49d1-92f0-1356c05feb2b-kube-api-access-gb4gc\") pod \"fe35a496-fcca-49d1-92f0-1356c05feb2b\" (UID: \"fe35a496-fcca-49d1-92f0-1356c05feb2b\") " Feb 16 22:08:30 crc kubenswrapper[4805]: I0216 22:08:30.808072 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe35a496-fcca-49d1-92f0-1356c05feb2b-kube-api-access-gb4gc" (OuterVolumeSpecName: "kube-api-access-gb4gc") pod "fe35a496-fcca-49d1-92f0-1356c05feb2b" (UID: "fe35a496-fcca-49d1-92f0-1356c05feb2b"). InnerVolumeSpecName "kube-api-access-gb4gc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:08:30 crc kubenswrapper[4805]: I0216 22:08:30.826957 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-inventory" (OuterVolumeSpecName: "inventory") pod "fe35a496-fcca-49d1-92f0-1356c05feb2b" (UID: "fe35a496-fcca-49d1-92f0-1356c05feb2b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:08:30 crc kubenswrapper[4805]: I0216 22:08:30.829077 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fe35a496-fcca-49d1-92f0-1356c05feb2b" (UID: "fe35a496-fcca-49d1-92f0-1356c05feb2b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:08:30 crc kubenswrapper[4805]: I0216 22:08:30.899560 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 22:08:30 crc kubenswrapper[4805]: I0216 22:08:30.899602 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe35a496-fcca-49d1-92f0-1356c05feb2b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 22:08:30 crc kubenswrapper[4805]: I0216 22:08:30.899618 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb4gc\" (UniqueName: \"kubernetes.io/projected/fe35a496-fcca-49d1-92f0-1356c05feb2b-kube-api-access-gb4gc\") on node \"crc\" DevicePath \"\"" Feb 16 22:08:31 crc kubenswrapper[4805]: I0216 22:08:31.134982 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" 
event={"ID":"fe35a496-fcca-49d1-92f0-1356c05feb2b","Type":"ContainerDied","Data":"b43712212b160d85e8a703cd23dcfd4da64a0eb1b4fa6c3006f008172dee0f38"} Feb 16 22:08:31 crc kubenswrapper[4805]: I0216 22:08:31.135039 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b43712212b160d85e8a703cd23dcfd4da64a0eb1b4fa6c3006f008172dee0f38" Feb 16 22:08:31 crc kubenswrapper[4805]: I0216 22:08:31.135112 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk" Feb 16 22:08:31 crc kubenswrapper[4805]: E0216 22:08:31.601187 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:08:37 crc kubenswrapper[4805]: I0216 22:08:37.599335 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:08:37 crc kubenswrapper[4805]: E0216 22:08:37.600407 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:08:40 crc kubenswrapper[4805]: E0216 22:08:40.602066 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:08:42 crc kubenswrapper[4805]: E0216 22:08:42.600877 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:08:50 crc kubenswrapper[4805]: I0216 22:08:50.598038 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:08:50 crc kubenswrapper[4805]: E0216 22:08:50.598732 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:08:51 crc kubenswrapper[4805]: E0216 22:08:51.600809 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:08:56 crc kubenswrapper[4805]: E0216 22:08:56.600640 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:09:02 crc kubenswrapper[4805]: E0216 
22:09:02.600042 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:09:03 crc kubenswrapper[4805]: I0216 22:09:03.605525 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:09:03 crc kubenswrapper[4805]: E0216 22:09:03.607140 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:09:10 crc kubenswrapper[4805]: E0216 22:09:10.621301 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:09:13 crc kubenswrapper[4805]: E0216 22:09:13.608228 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:09:18 crc kubenswrapper[4805]: I0216 22:09:18.599045 4805 scope.go:117] "RemoveContainer" 
containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:09:18 crc kubenswrapper[4805]: E0216 22:09:18.600061 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:09:25 crc kubenswrapper[4805]: E0216 22:09:25.602107 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:09:26 crc kubenswrapper[4805]: E0216 22:09:26.600089 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:09:33 crc kubenswrapper[4805]: I0216 22:09:33.607818 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:09:33 crc kubenswrapper[4805]: E0216 22:09:33.608503 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:09:36 crc kubenswrapper[4805]: E0216 22:09:36.600705 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:09:37 crc kubenswrapper[4805]: E0216 22:09:37.600422 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:09:47 crc kubenswrapper[4805]: I0216 22:09:47.598341 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:09:47 crc kubenswrapper[4805]: E0216 22:09:47.599084 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:09:47 crc kubenswrapper[4805]: E0216 22:09:47.600290 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 
22:09:49 crc kubenswrapper[4805]: E0216 22:09:49.602451 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:09:59 crc kubenswrapper[4805]: E0216 22:09:59.600806 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:10:00 crc kubenswrapper[4805]: I0216 22:10:00.600444 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:10:00 crc kubenswrapper[4805]: E0216 22:10:00.602089 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:10:01 crc kubenswrapper[4805]: E0216 22:10:01.599134 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:10:10 crc kubenswrapper[4805]: E0216 22:10:10.599763 4805 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:10:11 crc kubenswrapper[4805]: I0216 22:10:11.598778 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:10:11 crc kubenswrapper[4805]: E0216 22:10:11.599417 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:10:14 crc kubenswrapper[4805]: E0216 22:10:14.601285 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:10:21 crc kubenswrapper[4805]: E0216 22:10:21.599578 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:10:24 crc kubenswrapper[4805]: I0216 22:10:24.598010 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:10:24 crc kubenswrapper[4805]: E0216 22:10:24.599114 
4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:10:29 crc kubenswrapper[4805]: E0216 22:10:29.600140 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:10:33 crc kubenswrapper[4805]: E0216 22:10:33.611199 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:10:39 crc kubenswrapper[4805]: I0216 22:10:39.598756 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:10:39 crc kubenswrapper[4805]: E0216 22:10:39.599810 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:10:43 crc kubenswrapper[4805]: E0216 22:10:43.608658 4805 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:10:45 crc kubenswrapper[4805]: E0216 22:10:45.600060 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:10:54 crc kubenswrapper[4805]: I0216 22:10:54.597869 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:10:54 crc kubenswrapper[4805]: E0216 22:10:54.598851 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:10:56 crc kubenswrapper[4805]: E0216 22:10:56.601195 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:11:00 crc kubenswrapper[4805]: E0216 22:11:00.600136 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:11:06 crc kubenswrapper[4805]: I0216 22:11:06.598555 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:11:06 crc kubenswrapper[4805]: E0216 22:11:06.600270 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:11:07 crc kubenswrapper[4805]: E0216 22:11:07.600292 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:11:13 crc kubenswrapper[4805]: E0216 22:11:13.609479 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:11:19 crc kubenswrapper[4805]: I0216 22:11:19.597870 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:11:19 crc kubenswrapper[4805]: E0216 22:11:19.598699 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:11:20 crc kubenswrapper[4805]: E0216 22:11:20.602541 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:11:26 crc kubenswrapper[4805]: E0216 22:11:26.600659 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:11:34 crc kubenswrapper[4805]: I0216 22:11:34.598652 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:11:34 crc kubenswrapper[4805]: E0216 22:11:34.599426 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:11:34 crc kubenswrapper[4805]: E0216 22:11:34.601654 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:11:38 crc kubenswrapper[4805]: E0216 22:11:38.601671 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:11:45 crc kubenswrapper[4805]: E0216 22:11:45.600870 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:11:48 crc kubenswrapper[4805]: I0216 22:11:48.598121 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:11:48 crc kubenswrapper[4805]: E0216 22:11:48.598978 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:11:51 crc kubenswrapper[4805]: E0216 22:11:51.599872 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:11:57 crc kubenswrapper[4805]: E0216 22:11:57.600225 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:12:02 crc kubenswrapper[4805]: I0216 22:12:02.598773 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:12:02 crc kubenswrapper[4805]: E0216 22:12:02.599898 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:12:04 crc kubenswrapper[4805]: I0216 22:12:04.599904 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:12:04 crc kubenswrapper[4805]: E0216 22:12:04.707708 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:12:04 crc kubenswrapper[4805]: E0216 22:12:04.707997 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:12:04 crc kubenswrapper[4805]: E0216 22:12:04.708112 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89
q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:12:04 crc kubenswrapper[4805]: E0216 22:12:04.709366 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:12:08 crc kubenswrapper[4805]: E0216 22:12:08.600443 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.665034 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5thhd"] Feb 16 22:12:11 crc kubenswrapper[4805]: E0216 22:12:11.665900 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b3db38-5e15-4583-ae53-10aefe2afea6" containerName="registry-server" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.665916 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b3db38-5e15-4583-ae53-10aefe2afea6" containerName="registry-server" Feb 16 22:12:11 crc kubenswrapper[4805]: E0216 22:12:11.665946 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b3db38-5e15-4583-ae53-10aefe2afea6" containerName="extract-utilities" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.665954 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b3db38-5e15-4583-ae53-10aefe2afea6" containerName="extract-utilities" Feb 16 22:12:11 crc kubenswrapper[4805]: E0216 22:12:11.665986 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe35a496-fcca-49d1-92f0-1356c05feb2b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.665999 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe35a496-fcca-49d1-92f0-1356c05feb2b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:12:11 crc 
kubenswrapper[4805]: E0216 22:12:11.666011 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b3db38-5e15-4583-ae53-10aefe2afea6" containerName="extract-content" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.666018 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b3db38-5e15-4583-ae53-10aefe2afea6" containerName="extract-content" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.666238 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b3db38-5e15-4583-ae53-10aefe2afea6" containerName="registry-server" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.666267 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe35a496-fcca-49d1-92f0-1356c05feb2b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.668237 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.687963 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5thhd"] Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.828104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5qrk\" (UniqueName: \"kubernetes.io/projected/110bc074-f445-4e56-aa6c-0aba788756af-kube-api-access-c5qrk\") pod \"certified-operators-5thhd\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.828322 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-utilities\") pod \"certified-operators-5thhd\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " 
pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.828400 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-catalog-content\") pod \"certified-operators-5thhd\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.931441 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5qrk\" (UniqueName: \"kubernetes.io/projected/110bc074-f445-4e56-aa6c-0aba788756af-kube-api-access-c5qrk\") pod \"certified-operators-5thhd\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.931549 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-utilities\") pod \"certified-operators-5thhd\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.931593 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-catalog-content\") pod \"certified-operators-5thhd\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.932174 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-utilities\") pod \"certified-operators-5thhd\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " 
pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.932371 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-catalog-content\") pod \"certified-operators-5thhd\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.954781 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5qrk\" (UniqueName: \"kubernetes.io/projected/110bc074-f445-4e56-aa6c-0aba788756af-kube-api-access-c5qrk\") pod \"certified-operators-5thhd\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:11 crc kubenswrapper[4805]: I0216 22:12:11.997867 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:12 crc kubenswrapper[4805]: I0216 22:12:12.737438 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5thhd"] Feb 16 22:12:13 crc kubenswrapper[4805]: I0216 22:12:13.663922 4805 generic.go:334] "Generic (PLEG): container finished" podID="110bc074-f445-4e56-aa6c-0aba788756af" containerID="ba7815e18251fcf134321e748d58f0b9ae16cf19375281b78b737d55cc4164c3" exitCode=0 Feb 16 22:12:13 crc kubenswrapper[4805]: I0216 22:12:13.664017 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5thhd" event={"ID":"110bc074-f445-4e56-aa6c-0aba788756af","Type":"ContainerDied","Data":"ba7815e18251fcf134321e748d58f0b9ae16cf19375281b78b737d55cc4164c3"} Feb 16 22:12:13 crc kubenswrapper[4805]: I0216 22:12:13.664466 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5thhd" 
event={"ID":"110bc074-f445-4e56-aa6c-0aba788756af","Type":"ContainerStarted","Data":"4f49c3317aa33d0d699b99caa80fe1e28c2e32322c53e90383a749c19f639634"} Feb 16 22:12:14 crc kubenswrapper[4805]: I0216 22:12:14.676137 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5thhd" event={"ID":"110bc074-f445-4e56-aa6c-0aba788756af","Type":"ContainerStarted","Data":"1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9"} Feb 16 22:12:16 crc kubenswrapper[4805]: I0216 22:12:16.599053 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:12:16 crc kubenswrapper[4805]: I0216 22:12:16.704663 4805 generic.go:334] "Generic (PLEG): container finished" podID="110bc074-f445-4e56-aa6c-0aba788756af" containerID="1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9" exitCode=0 Feb 16 22:12:16 crc kubenswrapper[4805]: I0216 22:12:16.704708 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5thhd" event={"ID":"110bc074-f445-4e56-aa6c-0aba788756af","Type":"ContainerDied","Data":"1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9"} Feb 16 22:12:17 crc kubenswrapper[4805]: I0216 22:12:17.722913 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5thhd" event={"ID":"110bc074-f445-4e56-aa6c-0aba788756af","Type":"ContainerStarted","Data":"a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f"} Feb 16 22:12:17 crc kubenswrapper[4805]: I0216 22:12:17.730616 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"f87facc416a37b89d559829757c6df4025c9840a8cbd0b6efa4bdf4f3e6d1208"} Feb 16 22:12:17 crc kubenswrapper[4805]: I0216 22:12:17.751472 4805 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/certified-operators-5thhd" podStartSLOduration=3.321434316 podStartE2EDuration="6.751452814s" podCreationTimestamp="2026-02-16 22:12:11 +0000 UTC" firstStartedPulling="2026-02-16 22:12:13.665931716 +0000 UTC m=+4551.484615011" lastFinishedPulling="2026-02-16 22:12:17.095950214 +0000 UTC m=+4554.914633509" observedRunningTime="2026-02-16 22:12:17.743822679 +0000 UTC m=+4555.562505974" watchObservedRunningTime="2026-02-16 22:12:17.751452814 +0000 UTC m=+4555.570136109" Feb 16 22:12:19 crc kubenswrapper[4805]: E0216 22:12:19.600938 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:12:19 crc kubenswrapper[4805]: E0216 22:12:19.601452 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:12:21 crc kubenswrapper[4805]: I0216 22:12:21.998173 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:21 crc kubenswrapper[4805]: I0216 22:12:21.998799 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:22 crc kubenswrapper[4805]: I0216 22:12:22.057315 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:22 crc kubenswrapper[4805]: I0216 22:12:22.856191 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:22 crc kubenswrapper[4805]: I0216 22:12:22.918013 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5thhd"] Feb 16 22:12:24 crc kubenswrapper[4805]: I0216 22:12:24.875464 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5thhd" podUID="110bc074-f445-4e56-aa6c-0aba788756af" containerName="registry-server" containerID="cri-o://a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f" gracePeriod=2 Feb 16 22:12:25 crc kubenswrapper[4805]: I0216 22:12:25.463061 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:25 crc kubenswrapper[4805]: I0216 22:12:25.937138 4805 generic.go:334] "Generic (PLEG): container finished" podID="110bc074-f445-4e56-aa6c-0aba788756af" containerID="a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f" exitCode=0 Feb 16 22:12:25 crc kubenswrapper[4805]: I0216 22:12:25.937222 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5thhd" Feb 16 22:12:25 crc kubenswrapper[4805]: I0216 22:12:25.962767 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5thhd" event={"ID":"110bc074-f445-4e56-aa6c-0aba788756af","Type":"ContainerDied","Data":"a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f"} Feb 16 22:12:25 crc kubenswrapper[4805]: I0216 22:12:25.962818 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5thhd" event={"ID":"110bc074-f445-4e56-aa6c-0aba788756af","Type":"ContainerDied","Data":"4f49c3317aa33d0d699b99caa80fe1e28c2e32322c53e90383a749c19f639634"} Feb 16 22:12:25 crc kubenswrapper[4805]: I0216 22:12:25.962844 4805 scope.go:117] "RemoveContainer" containerID="a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f" Feb 16 22:12:25 crc kubenswrapper[4805]: I0216 22:12:25.983843 4805 scope.go:117] "RemoveContainer" containerID="1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.008908 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-catalog-content\") pod \"110bc074-f445-4e56-aa6c-0aba788756af\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.014688 4805 scope.go:117] "RemoveContainer" containerID="ba7815e18251fcf134321e748d58f0b9ae16cf19375281b78b737d55cc4164c3" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.018790 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-utilities\") pod \"110bc074-f445-4e56-aa6c-0aba788756af\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " Feb 16 22:12:26 crc kubenswrapper[4805]: 
I0216 22:12:26.018883 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5qrk\" (UniqueName: \"kubernetes.io/projected/110bc074-f445-4e56-aa6c-0aba788756af-kube-api-access-c5qrk\") pod \"110bc074-f445-4e56-aa6c-0aba788756af\" (UID: \"110bc074-f445-4e56-aa6c-0aba788756af\") " Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.022324 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-utilities" (OuterVolumeSpecName: "utilities") pod "110bc074-f445-4e56-aa6c-0aba788756af" (UID: "110bc074-f445-4e56-aa6c-0aba788756af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.027385 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110bc074-f445-4e56-aa6c-0aba788756af-kube-api-access-c5qrk" (OuterVolumeSpecName: "kube-api-access-c5qrk") pod "110bc074-f445-4e56-aa6c-0aba788756af" (UID: "110bc074-f445-4e56-aa6c-0aba788756af"). InnerVolumeSpecName "kube-api-access-c5qrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.067161 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "110bc074-f445-4e56-aa6c-0aba788756af" (UID: "110bc074-f445-4e56-aa6c-0aba788756af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.122806 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.122838 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5qrk\" (UniqueName: \"kubernetes.io/projected/110bc074-f445-4e56-aa6c-0aba788756af-kube-api-access-c5qrk\") on node \"crc\" DevicePath \"\"" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.122848 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/110bc074-f445-4e56-aa6c-0aba788756af-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.143768 4805 scope.go:117] "RemoveContainer" containerID="a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f" Feb 16 22:12:26 crc kubenswrapper[4805]: E0216 22:12:26.144443 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f\": container with ID starting with a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f not found: ID does not exist" containerID="a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.144622 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f"} err="failed to get container status \"a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f\": rpc error: code = NotFound desc = could not find container \"a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f\": container with ID 
starting with a6b9ea6401c19cc42024a9a2c3677d3c4efd5160a33bf74c6761ee8a9278433f not found: ID does not exist" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.144897 4805 scope.go:117] "RemoveContainer" containerID="1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9" Feb 16 22:12:26 crc kubenswrapper[4805]: E0216 22:12:26.145294 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9\": container with ID starting with 1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9 not found: ID does not exist" containerID="1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.145429 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9"} err="failed to get container status \"1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9\": rpc error: code = NotFound desc = could not find container \"1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9\": container with ID starting with 1d0c7e51bbf48ebf9732ddcebd0585d88b5ae854bdd747f7fc71ae6629543ec9 not found: ID does not exist" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.145527 4805 scope.go:117] "RemoveContainer" containerID="ba7815e18251fcf134321e748d58f0b9ae16cf19375281b78b737d55cc4164c3" Feb 16 22:12:26 crc kubenswrapper[4805]: E0216 22:12:26.146194 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7815e18251fcf134321e748d58f0b9ae16cf19375281b78b737d55cc4164c3\": container with ID starting with ba7815e18251fcf134321e748d58f0b9ae16cf19375281b78b737d55cc4164c3 not found: ID does not exist" containerID="ba7815e18251fcf134321e748d58f0b9ae16cf19375281b78b737d55cc4164c3" Feb 16 
22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.146322 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7815e18251fcf134321e748d58f0b9ae16cf19375281b78b737d55cc4164c3"} err="failed to get container status \"ba7815e18251fcf134321e748d58f0b9ae16cf19375281b78b737d55cc4164c3\": rpc error: code = NotFound desc = could not find container \"ba7815e18251fcf134321e748d58f0b9ae16cf19375281b78b737d55cc4164c3\": container with ID starting with ba7815e18251fcf134321e748d58f0b9ae16cf19375281b78b737d55cc4164c3 not found: ID does not exist" Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.289803 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5thhd"] Feb 16 22:12:26 crc kubenswrapper[4805]: I0216 22:12:26.303662 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5thhd"] Feb 16 22:12:27 crc kubenswrapper[4805]: I0216 22:12:27.627627 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="110bc074-f445-4e56-aa6c-0aba788756af" path="/var/lib/kubelet/pods/110bc074-f445-4e56-aa6c-0aba788756af/volumes" Feb 16 22:12:33 crc kubenswrapper[4805]: E0216 22:12:33.609661 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:12:34 crc kubenswrapper[4805]: E0216 22:12:34.735338 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was 
deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:12:34 crc kubenswrapper[4805]: E0216 22:12:34.735775 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:12:34 crc kubenswrapper[4805]: E0216 22:12:34.735972 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-b
undle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 22:12:34 crc kubenswrapper[4805]: E0216 22:12:34.737234 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:12:38 crc kubenswrapper[4805]: I0216 22:12:38.358110 4805 scope.go:117] "RemoveContainer" containerID="4114d838c764885a0d9529fc7f5605c69a3adcf396c40f551d08f297ad1b5950" Feb 16 22:12:38 crc kubenswrapper[4805]: I0216 22:12:38.394658 4805 scope.go:117] "RemoveContainer" containerID="c71026f0410f5cb37b31cde957f1643bb2fab824e64ba0d03e2308b7678f529e" Feb 16 22:12:38 crc kubenswrapper[4805]: I0216 22:12:38.445703 4805 scope.go:117] "RemoveContainer" containerID="5b5dbce27dda2338802f050f52f3f7031fcba59c6824d40d256a8f499749c9e8" Feb 16 22:12:45 crc kubenswrapper[4805]: E0216 22:12:45.600816 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:12:48 crc kubenswrapper[4805]: E0216 22:12:48.599426 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" 
podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:12:59 crc kubenswrapper[4805]: I0216 22:12:59.120238 4805 trace.go:236] Trace[805091415]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (16-Feb-2026 22:12:58.070) (total time: 1048ms): Feb 16 22:12:59 crc kubenswrapper[4805]: Trace[805091415]: [1.048055958s] [1.048055958s] END Feb 16 22:13:00 crc kubenswrapper[4805]: E0216 22:13:00.603893 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:13:03 crc kubenswrapper[4805]: E0216 22:13:03.611428 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.443963 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-72wrz"] Feb 16 22:13:05 crc kubenswrapper[4805]: E0216 22:13:05.444887 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110bc074-f445-4e56-aa6c-0aba788756af" containerName="extract-content" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.444908 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="110bc074-f445-4e56-aa6c-0aba788756af" containerName="extract-content" Feb 16 22:13:05 crc kubenswrapper[4805]: E0216 22:13:05.444946 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110bc074-f445-4e56-aa6c-0aba788756af" containerName="registry-server" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 
22:13:05.444955 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="110bc074-f445-4e56-aa6c-0aba788756af" containerName="registry-server" Feb 16 22:13:05 crc kubenswrapper[4805]: E0216 22:13:05.444994 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110bc074-f445-4e56-aa6c-0aba788756af" containerName="extract-utilities" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.445003 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="110bc074-f445-4e56-aa6c-0aba788756af" containerName="extract-utilities" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.445301 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="110bc074-f445-4e56-aa6c-0aba788756af" containerName="registry-server" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.447514 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.456300 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72wrz"] Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.507882 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sx4s\" (UniqueName: \"kubernetes.io/projected/13845c1d-f328-4e36-9f1d-db8c33bffcde-kube-api-access-6sx4s\") pod \"redhat-operators-72wrz\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.507941 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-catalog-content\") pod \"redhat-operators-72wrz\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 
22:13:05.508028 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-utilities\") pod \"redhat-operators-72wrz\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.611539 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sx4s\" (UniqueName: \"kubernetes.io/projected/13845c1d-f328-4e36-9f1d-db8c33bffcde-kube-api-access-6sx4s\") pod \"redhat-operators-72wrz\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.611618 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-catalog-content\") pod \"redhat-operators-72wrz\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.612174 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-catalog-content\") pod \"redhat-operators-72wrz\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.612366 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-utilities\") pod \"redhat-operators-72wrz\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.612674 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-utilities\") pod \"redhat-operators-72wrz\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.644939 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sx4s\" (UniqueName: \"kubernetes.io/projected/13845c1d-f328-4e36-9f1d-db8c33bffcde-kube-api-access-6sx4s\") pod \"redhat-operators-72wrz\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:05 crc kubenswrapper[4805]: I0216 22:13:05.773424 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:06 crc kubenswrapper[4805]: I0216 22:13:06.293051 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72wrz"] Feb 16 22:13:06 crc kubenswrapper[4805]: W0216 22:13:06.299894 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13845c1d_f328_4e36_9f1d_db8c33bffcde.slice/crio-299f7b05d1727bffad147d291ad2db9239756e0f4b661449067d2a414cd74b0b WatchSource:0}: Error finding container 299f7b05d1727bffad147d291ad2db9239756e0f4b661449067d2a414cd74b0b: Status 404 returned error can't find the container with id 299f7b05d1727bffad147d291ad2db9239756e0f4b661449067d2a414cd74b0b Feb 16 22:13:06 crc kubenswrapper[4805]: I0216 22:13:06.383340 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72wrz" event={"ID":"13845c1d-f328-4e36-9f1d-db8c33bffcde","Type":"ContainerStarted","Data":"299f7b05d1727bffad147d291ad2db9239756e0f4b661449067d2a414cd74b0b"} Feb 16 22:13:07 crc kubenswrapper[4805]: I0216 22:13:07.398072 4805 
generic.go:334] "Generic (PLEG): container finished" podID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerID="c7627d3a2e5f3e903985261d7f83391e6e29a94fc08784ed4ca1ec2e8aba2020" exitCode=0 Feb 16 22:13:07 crc kubenswrapper[4805]: I0216 22:13:07.398122 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72wrz" event={"ID":"13845c1d-f328-4e36-9f1d-db8c33bffcde","Type":"ContainerDied","Data":"c7627d3a2e5f3e903985261d7f83391e6e29a94fc08784ed4ca1ec2e8aba2020"} Feb 16 22:13:09 crc kubenswrapper[4805]: I0216 22:13:09.421662 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72wrz" event={"ID":"13845c1d-f328-4e36-9f1d-db8c33bffcde","Type":"ContainerStarted","Data":"61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3"} Feb 16 22:13:14 crc kubenswrapper[4805]: I0216 22:13:14.476378 4805 generic.go:334] "Generic (PLEG): container finished" podID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerID="61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3" exitCode=0 Feb 16 22:13:14 crc kubenswrapper[4805]: I0216 22:13:14.476421 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72wrz" event={"ID":"13845c1d-f328-4e36-9f1d-db8c33bffcde","Type":"ContainerDied","Data":"61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3"} Feb 16 22:13:15 crc kubenswrapper[4805]: I0216 22:13:15.489544 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72wrz" event={"ID":"13845c1d-f328-4e36-9f1d-db8c33bffcde","Type":"ContainerStarted","Data":"6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb"} Feb 16 22:13:15 crc kubenswrapper[4805]: I0216 22:13:15.512111 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-72wrz" podStartSLOduration=2.897867689 podStartE2EDuration="10.512086222s" 
podCreationTimestamp="2026-02-16 22:13:05 +0000 UTC" firstStartedPulling="2026-02-16 22:13:07.400578853 +0000 UTC m=+4605.219262138" lastFinishedPulling="2026-02-16 22:13:15.014797386 +0000 UTC m=+4612.833480671" observedRunningTime="2026-02-16 22:13:15.5079088 +0000 UTC m=+4613.326592105" watchObservedRunningTime="2026-02-16 22:13:15.512086222 +0000 UTC m=+4613.330769527" Feb 16 22:13:15 crc kubenswrapper[4805]: E0216 22:13:15.600231 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:13:15 crc kubenswrapper[4805]: I0216 22:13:15.775326 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:15 crc kubenswrapper[4805]: I0216 22:13:15.775377 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:16 crc kubenswrapper[4805]: E0216 22:13:16.599214 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:13:16 crc kubenswrapper[4805]: I0216 22:13:16.820784 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-72wrz" podUID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerName="registry-server" probeResult="failure" output=< Feb 16 22:13:16 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 22:13:16 crc kubenswrapper[4805]: > Feb 16 22:13:25 crc 
kubenswrapper[4805]: I0216 22:13:25.823602 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:25 crc kubenswrapper[4805]: I0216 22:13:25.873210 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:26 crc kubenswrapper[4805]: I0216 22:13:26.071769 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-72wrz"] Feb 16 22:13:27 crc kubenswrapper[4805]: I0216 22:13:27.624858 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-72wrz" podUID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerName="registry-server" containerID="cri-o://6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb" gracePeriod=2 Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.259255 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.398768 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-utilities\") pod \"13845c1d-f328-4e36-9f1d-db8c33bffcde\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.399011 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-catalog-content\") pod \"13845c1d-f328-4e36-9f1d-db8c33bffcde\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.399067 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sx4s\" (UniqueName: \"kubernetes.io/projected/13845c1d-f328-4e36-9f1d-db8c33bffcde-kube-api-access-6sx4s\") pod \"13845c1d-f328-4e36-9f1d-db8c33bffcde\" (UID: \"13845c1d-f328-4e36-9f1d-db8c33bffcde\") " Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.399947 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-utilities" (OuterVolumeSpecName: "utilities") pod "13845c1d-f328-4e36-9f1d-db8c33bffcde" (UID: "13845c1d-f328-4e36-9f1d-db8c33bffcde"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.406157 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13845c1d-f328-4e36-9f1d-db8c33bffcde-kube-api-access-6sx4s" (OuterVolumeSpecName: "kube-api-access-6sx4s") pod "13845c1d-f328-4e36-9f1d-db8c33bffcde" (UID: "13845c1d-f328-4e36-9f1d-db8c33bffcde"). InnerVolumeSpecName "kube-api-access-6sx4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.501871 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sx4s\" (UniqueName: \"kubernetes.io/projected/13845c1d-f328-4e36-9f1d-db8c33bffcde-kube-api-access-6sx4s\") on node \"crc\" DevicePath \"\"" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.501909 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.528259 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13845c1d-f328-4e36-9f1d-db8c33bffcde" (UID: "13845c1d-f328-4e36-9f1d-db8c33bffcde"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.605114 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13845c1d-f328-4e36-9f1d-db8c33bffcde-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.642171 4805 generic.go:334] "Generic (PLEG): container finished" podID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerID="6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb" exitCode=0 Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.642212 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72wrz" event={"ID":"13845c1d-f328-4e36-9f1d-db8c33bffcde","Type":"ContainerDied","Data":"6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb"} Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.642265 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-72wrz" event={"ID":"13845c1d-f328-4e36-9f1d-db8c33bffcde","Type":"ContainerDied","Data":"299f7b05d1727bffad147d291ad2db9239756e0f4b661449067d2a414cd74b0b"} Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.642301 4805 scope.go:117] "RemoveContainer" containerID="6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.642350 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72wrz" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.677206 4805 scope.go:117] "RemoveContainer" containerID="61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.699177 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-72wrz"] Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.708699 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-72wrz"] Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.726306 4805 scope.go:117] "RemoveContainer" containerID="c7627d3a2e5f3e903985261d7f83391e6e29a94fc08784ed4ca1ec2e8aba2020" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.790227 4805 scope.go:117] "RemoveContainer" containerID="6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb" Feb 16 22:13:28 crc kubenswrapper[4805]: E0216 22:13:28.790874 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb\": container with ID starting with 6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb not found: ID does not exist" containerID="6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.790904 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb"} err="failed to get container status \"6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb\": rpc error: code = NotFound desc = could not find container \"6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb\": container with ID starting with 6b86413696dab12fda656fc1252558a3ee6e057672ea43466839e03102439fbb not found: ID does not exist" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.790922 4805 scope.go:117] "RemoveContainer" containerID="61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3" Feb 16 22:13:28 crc kubenswrapper[4805]: E0216 22:13:28.791331 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3\": container with ID starting with 61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3 not found: ID does not exist" containerID="61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.791390 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3"} err="failed to get container status \"61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3\": rpc error: code = NotFound desc = could not find container \"61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3\": container with ID starting with 61ad8cc514f51a494d9956ea8912144405fd77453c39b844a3bd88c0b1808cc3 not found: ID does not exist" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.791418 4805 scope.go:117] "RemoveContainer" containerID="c7627d3a2e5f3e903985261d7f83391e6e29a94fc08784ed4ca1ec2e8aba2020" Feb 16 22:13:28 crc kubenswrapper[4805]: E0216 
22:13:28.791851 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7627d3a2e5f3e903985261d7f83391e6e29a94fc08784ed4ca1ec2e8aba2020\": container with ID starting with c7627d3a2e5f3e903985261d7f83391e6e29a94fc08784ed4ca1ec2e8aba2020 not found: ID does not exist" containerID="c7627d3a2e5f3e903985261d7f83391e6e29a94fc08784ed4ca1ec2e8aba2020" Feb 16 22:13:28 crc kubenswrapper[4805]: I0216 22:13:28.791881 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7627d3a2e5f3e903985261d7f83391e6e29a94fc08784ed4ca1ec2e8aba2020"} err="failed to get container status \"c7627d3a2e5f3e903985261d7f83391e6e29a94fc08784ed4ca1ec2e8aba2020\": rpc error: code = NotFound desc = could not find container \"c7627d3a2e5f3e903985261d7f83391e6e29a94fc08784ed4ca1ec2e8aba2020\": container with ID starting with c7627d3a2e5f3e903985261d7f83391e6e29a94fc08784ed4ca1ec2e8aba2020 not found: ID does not exist" Feb 16 22:13:29 crc kubenswrapper[4805]: E0216 22:13:29.600105 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:13:29 crc kubenswrapper[4805]: I0216 22:13:29.627140 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13845c1d-f328-4e36-9f1d-db8c33bffcde" path="/var/lib/kubelet/pods/13845c1d-f328-4e36-9f1d-db8c33bffcde/volumes" Feb 16 22:13:31 crc kubenswrapper[4805]: E0216 22:13:31.600095 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:13:42 crc kubenswrapper[4805]: E0216 22:13:42.601823 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:13:46 crc kubenswrapper[4805]: E0216 22:13:46.601854 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.031505 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm"] Feb 16 22:13:48 crc kubenswrapper[4805]: E0216 22:13:48.032782 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerName="registry-server" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.032797 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerName="registry-server" Feb 16 22:13:48 crc kubenswrapper[4805]: E0216 22:13:48.032808 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerName="extract-content" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.032814 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerName="extract-content" Feb 16 22:13:48 crc kubenswrapper[4805]: E0216 22:13:48.032850 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerName="extract-utilities" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.032856 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerName="extract-utilities" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.033087 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="13845c1d-f328-4e36-9f1d-db8c33bffcde" containerName="registry-server" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.034341 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.038975 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.039506 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.039853 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.040195 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46tr9" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.068998 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm"] Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.141104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8msrm\" (UID: 
\"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.141147 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8msrm\" (UID: \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.141215 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98gv4\" (UniqueName: \"kubernetes.io/projected/712b6325-4e7e-4557-ba00-fdab4a8e3f79-kube-api-access-98gv4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8msrm\" (UID: \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.242948 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8msrm\" (UID: \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.243231 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98gv4\" (UniqueName: \"kubernetes.io/projected/712b6325-4e7e-4557-ba00-fdab4a8e3f79-kube-api-access-98gv4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8msrm\" (UID: \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.243481 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8msrm\" (UID: \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.248896 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8msrm\" (UID: \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.248909 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8msrm\" (UID: \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.264221 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98gv4\" (UniqueName: \"kubernetes.io/projected/712b6325-4e7e-4557-ba00-fdab4a8e3f79-kube-api-access-98gv4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8msrm\" (UID: \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.362961 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:13:48 crc kubenswrapper[4805]: I0216 22:13:48.943978 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm"] Feb 16 22:13:49 crc kubenswrapper[4805]: W0216 22:13:49.212645 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod712b6325_4e7e_4557_ba00_fdab4a8e3f79.slice/crio-8511a497129d52890ee74c07e06cf90837e15d6f1b58811d5d6278040b577256 WatchSource:0}: Error finding container 8511a497129d52890ee74c07e06cf90837e15d6f1b58811d5d6278040b577256: Status 404 returned error can't find the container with id 8511a497129d52890ee74c07e06cf90837e15d6f1b58811d5d6278040b577256 Feb 16 22:13:49 crc kubenswrapper[4805]: I0216 22:13:49.924024 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" event={"ID":"712b6325-4e7e-4557-ba00-fdab4a8e3f79","Type":"ContainerStarted","Data":"8511a497129d52890ee74c07e06cf90837e15d6f1b58811d5d6278040b577256"} Feb 16 22:13:50 crc kubenswrapper[4805]: I0216 22:13:50.945066 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" event={"ID":"712b6325-4e7e-4557-ba00-fdab4a8e3f79","Type":"ContainerStarted","Data":"7bcc71e0167cadbfdb27b0a317cfb613764d5e8c65d2f26ab6068e06a2df3f84"} Feb 16 22:13:50 crc kubenswrapper[4805]: I0216 22:13:50.968050 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" podStartSLOduration=2.482712945 podStartE2EDuration="2.968032381s" podCreationTimestamp="2026-02-16 22:13:48 +0000 UTC" firstStartedPulling="2026-02-16 22:13:49.215474029 +0000 UTC m=+4647.034157324" lastFinishedPulling="2026-02-16 22:13:49.700793465 +0000 UTC 
m=+4647.519476760" observedRunningTime="2026-02-16 22:13:50.965287687 +0000 UTC m=+4648.783971002" watchObservedRunningTime="2026-02-16 22:13:50.968032381 +0000 UTC m=+4648.786715676" Feb 16 22:13:56 crc kubenswrapper[4805]: E0216 22:13:56.601447 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:14:00 crc kubenswrapper[4805]: E0216 22:14:00.602009 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:14:08 crc kubenswrapper[4805]: E0216 22:14:08.601299 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:14:12 crc kubenswrapper[4805]: E0216 22:14:12.601380 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:14:20 crc kubenswrapper[4805]: E0216 22:14:20.600323 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:14:24 crc kubenswrapper[4805]: E0216 22:14:24.600183 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:14:32 crc kubenswrapper[4805]: I0216 22:14:32.003008 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7688d557bc-2jgzd" podUID="95ea5d76-aedb-4a0a-a03d-fdc9140265e4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 16 22:14:35 crc kubenswrapper[4805]: E0216 22:14:35.600412 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:14:37 crc kubenswrapper[4805]: E0216 22:14:37.599657 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:14:38 crc kubenswrapper[4805]: I0216 22:14:38.099678 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:14:38 crc kubenswrapper[4805]: I0216 22:14:38.099809 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:14:46 crc kubenswrapper[4805]: E0216 22:14:46.600359 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:14:50 crc kubenswrapper[4805]: E0216 22:14:50.600367 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:14:57 crc kubenswrapper[4805]: E0216 22:14:57.600376 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.174025 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j"] Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.176754 4805 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.179398 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.179742 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.185012 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j"] Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.300286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bacc0a7-c880-48be-a508-5181b7313e0b-config-volume\") pod \"collect-profiles-29521335-7vf2j\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.300364 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bacc0a7-c880-48be-a508-5181b7313e0b-secret-volume\") pod \"collect-profiles-29521335-7vf2j\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.300630 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk659\" (UniqueName: \"kubernetes.io/projected/0bacc0a7-c880-48be-a508-5181b7313e0b-kube-api-access-zk659\") pod \"collect-profiles-29521335-7vf2j\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.402282 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bacc0a7-c880-48be-a508-5181b7313e0b-config-volume\") pod \"collect-profiles-29521335-7vf2j\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.402358 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bacc0a7-c880-48be-a508-5181b7313e0b-secret-volume\") pod \"collect-profiles-29521335-7vf2j\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.402619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk659\" (UniqueName: \"kubernetes.io/projected/0bacc0a7-c880-48be-a508-5181b7313e0b-kube-api-access-zk659\") pod \"collect-profiles-29521335-7vf2j\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.403246 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bacc0a7-c880-48be-a508-5181b7313e0b-config-volume\") pod \"collect-profiles-29521335-7vf2j\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.409024 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0bacc0a7-c880-48be-a508-5181b7313e0b-secret-volume\") pod \"collect-profiles-29521335-7vf2j\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.418947 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk659\" (UniqueName: \"kubernetes.io/projected/0bacc0a7-c880-48be-a508-5181b7313e0b-kube-api-access-zk659\") pod \"collect-profiles-29521335-7vf2j\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:00 crc kubenswrapper[4805]: I0216 22:15:00.501662 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:01 crc kubenswrapper[4805]: I0216 22:15:01.022619 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j"] Feb 16 22:15:01 crc kubenswrapper[4805]: I0216 22:15:01.794481 4805 generic.go:334] "Generic (PLEG): container finished" podID="0bacc0a7-c880-48be-a508-5181b7313e0b" containerID="f8e4a7fa8b9472ddb37f37bd5237930d9f4c5dfb06eaeda2722f8477d1063259" exitCode=0 Feb 16 22:15:01 crc kubenswrapper[4805]: I0216 22:15:01.794589 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" event={"ID":"0bacc0a7-c880-48be-a508-5181b7313e0b","Type":"ContainerDied","Data":"f8e4a7fa8b9472ddb37f37bd5237930d9f4c5dfb06eaeda2722f8477d1063259"} Feb 16 22:15:01 crc kubenswrapper[4805]: I0216 22:15:01.794777 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" 
event={"ID":"0bacc0a7-c880-48be-a508-5181b7313e0b","Type":"ContainerStarted","Data":"d64fb242d4b63619212bc4f9898adb1895fe305707428e909b1b56fe220b9b0a"} Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.322504 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.417699 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bacc0a7-c880-48be-a508-5181b7313e0b-secret-volume\") pod \"0bacc0a7-c880-48be-a508-5181b7313e0b\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.417894 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk659\" (UniqueName: \"kubernetes.io/projected/0bacc0a7-c880-48be-a508-5181b7313e0b-kube-api-access-zk659\") pod \"0bacc0a7-c880-48be-a508-5181b7313e0b\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.418068 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bacc0a7-c880-48be-a508-5181b7313e0b-config-volume\") pod \"0bacc0a7-c880-48be-a508-5181b7313e0b\" (UID: \"0bacc0a7-c880-48be-a508-5181b7313e0b\") " Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.420749 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bacc0a7-c880-48be-a508-5181b7313e0b-config-volume" (OuterVolumeSpecName: "config-volume") pod "0bacc0a7-c880-48be-a508-5181b7313e0b" (UID: "0bacc0a7-c880-48be-a508-5181b7313e0b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.425258 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bacc0a7-c880-48be-a508-5181b7313e0b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0bacc0a7-c880-48be-a508-5181b7313e0b" (UID: "0bacc0a7-c880-48be-a508-5181b7313e0b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.427951 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bacc0a7-c880-48be-a508-5181b7313e0b-kube-api-access-zk659" (OuterVolumeSpecName: "kube-api-access-zk659") pod "0bacc0a7-c880-48be-a508-5181b7313e0b" (UID: "0bacc0a7-c880-48be-a508-5181b7313e0b"). InnerVolumeSpecName "kube-api-access-zk659". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.520325 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk659\" (UniqueName: \"kubernetes.io/projected/0bacc0a7-c880-48be-a508-5181b7313e0b-kube-api-access-zk659\") on node \"crc\" DevicePath \"\"" Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.520356 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bacc0a7-c880-48be-a508-5181b7313e0b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.520365 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bacc0a7-c880-48be-a508-5181b7313e0b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.857843 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" 
event={"ID":"0bacc0a7-c880-48be-a508-5181b7313e0b","Type":"ContainerDied","Data":"d64fb242d4b63619212bc4f9898adb1895fe305707428e909b1b56fe220b9b0a"} Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.858209 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d64fb242d4b63619212bc4f9898adb1895fe305707428e909b1b56fe220b9b0a" Feb 16 22:15:03 crc kubenswrapper[4805]: I0216 22:15:03.858261 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-7vf2j" Feb 16 22:15:04 crc kubenswrapper[4805]: I0216 22:15:04.413810 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq"] Feb 16 22:15:04 crc kubenswrapper[4805]: I0216 22:15:04.422864 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-rpxhq"] Feb 16 22:15:05 crc kubenswrapper[4805]: E0216 22:15:05.601836 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:15:05 crc kubenswrapper[4805]: I0216 22:15:05.617713 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc" path="/var/lib/kubelet/pods/d6f857ec-f8a8-4b15-bb67-4fc1f0ba0ecc/volumes" Feb 16 22:15:08 crc kubenswrapper[4805]: I0216 22:15:08.099988 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:15:08 crc 
kubenswrapper[4805]: I0216 22:15:08.100628 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:15:12 crc kubenswrapper[4805]: E0216 22:15:12.603057 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.805116 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j4hnz"] Feb 16 22:15:17 crc kubenswrapper[4805]: E0216 22:15:17.808380 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bacc0a7-c880-48be-a508-5181b7313e0b" containerName="collect-profiles" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.808415 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bacc0a7-c880-48be-a508-5181b7313e0b" containerName="collect-profiles" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.808881 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bacc0a7-c880-48be-a508-5181b7313e0b" containerName="collect-profiles" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.811552 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.825035 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4hnz"] Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.893976 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt76j\" (UniqueName: \"kubernetes.io/projected/b827dfdd-28db-421e-b55b-659784f34816-kube-api-access-vt76j\") pod \"redhat-marketplace-j4hnz\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.894071 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-catalog-content\") pod \"redhat-marketplace-j4hnz\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.894097 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-utilities\") pod \"redhat-marketplace-j4hnz\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.996572 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt76j\" (UniqueName: \"kubernetes.io/projected/b827dfdd-28db-421e-b55b-659784f34816-kube-api-access-vt76j\") pod \"redhat-marketplace-j4hnz\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.996674 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-catalog-content\") pod \"redhat-marketplace-j4hnz\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.996701 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-utilities\") pod \"redhat-marketplace-j4hnz\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.997346 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-utilities\") pod \"redhat-marketplace-j4hnz\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:17 crc kubenswrapper[4805]: I0216 22:15:17.997433 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-catalog-content\") pod \"redhat-marketplace-j4hnz\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:18 crc kubenswrapper[4805]: I0216 22:15:18.017603 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt76j\" (UniqueName: \"kubernetes.io/projected/b827dfdd-28db-421e-b55b-659784f34816-kube-api-access-vt76j\") pod \"redhat-marketplace-j4hnz\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:18 crc kubenswrapper[4805]: I0216 22:15:18.176404 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:18 crc kubenswrapper[4805]: I0216 22:15:18.741563 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4hnz"] Feb 16 22:15:19 crc kubenswrapper[4805]: I0216 22:15:19.045751 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4hnz" event={"ID":"b827dfdd-28db-421e-b55b-659784f34816","Type":"ContainerStarted","Data":"97d1d187cb23397ad30c3bda29beb72a6cb25d130962783756817598636012d7"} Feb 16 22:15:19 crc kubenswrapper[4805]: E0216 22:15:19.599611 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:15:20 crc kubenswrapper[4805]: I0216 22:15:20.057550 4805 generic.go:334] "Generic (PLEG): container finished" podID="b827dfdd-28db-421e-b55b-659784f34816" containerID="5793c92e5396e9baf9ec66a0312992591db0a273ab8247859708a4f9e564ed89" exitCode=0 Feb 16 22:15:20 crc kubenswrapper[4805]: I0216 22:15:20.057612 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4hnz" event={"ID":"b827dfdd-28db-421e-b55b-659784f34816","Type":"ContainerDied","Data":"5793c92e5396e9baf9ec66a0312992591db0a273ab8247859708a4f9e564ed89"} Feb 16 22:15:21 crc kubenswrapper[4805]: I0216 22:15:21.069447 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4hnz" event={"ID":"b827dfdd-28db-421e-b55b-659784f34816","Type":"ContainerStarted","Data":"5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d"} Feb 16 22:15:22 crc kubenswrapper[4805]: I0216 22:15:22.081062 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="b827dfdd-28db-421e-b55b-659784f34816" containerID="5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d" exitCode=0 Feb 16 22:15:22 crc kubenswrapper[4805]: I0216 22:15:22.081208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4hnz" event={"ID":"b827dfdd-28db-421e-b55b-659784f34816","Type":"ContainerDied","Data":"5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d"} Feb 16 22:15:23 crc kubenswrapper[4805]: I0216 22:15:23.096566 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4hnz" event={"ID":"b827dfdd-28db-421e-b55b-659784f34816","Type":"ContainerStarted","Data":"c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab"} Feb 16 22:15:23 crc kubenswrapper[4805]: I0216 22:15:23.124232 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j4hnz" podStartSLOduration=3.718128191 podStartE2EDuration="6.124210899s" podCreationTimestamp="2026-02-16 22:15:17 +0000 UTC" firstStartedPulling="2026-02-16 22:15:20.059397605 +0000 UTC m=+4737.878080920" lastFinishedPulling="2026-02-16 22:15:22.465480333 +0000 UTC m=+4740.284163628" observedRunningTime="2026-02-16 22:15:23.122144934 +0000 UTC m=+4740.940828229" watchObservedRunningTime="2026-02-16 22:15:23.124210899 +0000 UTC m=+4740.942894184" Feb 16 22:15:27 crc kubenswrapper[4805]: E0216 22:15:27.600227 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:15:28 crc kubenswrapper[4805]: I0216 22:15:28.177967 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:28 crc kubenswrapper[4805]: I0216 22:15:28.178297 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:28 crc kubenswrapper[4805]: I0216 22:15:28.567844 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:29 crc kubenswrapper[4805]: I0216 22:15:29.251541 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:29 crc kubenswrapper[4805]: I0216 22:15:29.310895 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4hnz"] Feb 16 22:15:31 crc kubenswrapper[4805]: I0216 22:15:31.194295 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j4hnz" podUID="b827dfdd-28db-421e-b55b-659784f34816" containerName="registry-server" containerID="cri-o://c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab" gracePeriod=2 Feb 16 22:15:31 crc kubenswrapper[4805]: I0216 22:15:31.783109 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:31 crc kubenswrapper[4805]: I0216 22:15:31.872089 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-catalog-content\") pod \"b827dfdd-28db-421e-b55b-659784f34816\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " Feb 16 22:15:31 crc kubenswrapper[4805]: I0216 22:15:31.872571 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-utilities\") pod \"b827dfdd-28db-421e-b55b-659784f34816\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " Feb 16 22:15:31 crc kubenswrapper[4805]: I0216 22:15:31.872718 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt76j\" (UniqueName: \"kubernetes.io/projected/b827dfdd-28db-421e-b55b-659784f34816-kube-api-access-vt76j\") pod \"b827dfdd-28db-421e-b55b-659784f34816\" (UID: \"b827dfdd-28db-421e-b55b-659784f34816\") " Feb 16 22:15:31 crc kubenswrapper[4805]: I0216 22:15:31.873181 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-utilities" (OuterVolumeSpecName: "utilities") pod "b827dfdd-28db-421e-b55b-659784f34816" (UID: "b827dfdd-28db-421e-b55b-659784f34816"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:15:31 crc kubenswrapper[4805]: I0216 22:15:31.873475 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:15:31 crc kubenswrapper[4805]: I0216 22:15:31.881950 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b827dfdd-28db-421e-b55b-659784f34816-kube-api-access-vt76j" (OuterVolumeSpecName: "kube-api-access-vt76j") pod "b827dfdd-28db-421e-b55b-659784f34816" (UID: "b827dfdd-28db-421e-b55b-659784f34816"). InnerVolumeSpecName "kube-api-access-vt76j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:15:31 crc kubenswrapper[4805]: I0216 22:15:31.901640 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b827dfdd-28db-421e-b55b-659784f34816" (UID: "b827dfdd-28db-421e-b55b-659784f34816"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:15:31 crc kubenswrapper[4805]: I0216 22:15:31.975165 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt76j\" (UniqueName: \"kubernetes.io/projected/b827dfdd-28db-421e-b55b-659784f34816-kube-api-access-vt76j\") on node \"crc\" DevicePath \"\"" Feb 16 22:15:31 crc kubenswrapper[4805]: I0216 22:15:31.975394 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b827dfdd-28db-421e-b55b-659784f34816-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.211603 4805 generic.go:334] "Generic (PLEG): container finished" podID="b827dfdd-28db-421e-b55b-659784f34816" containerID="c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab" exitCode=0 Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.211699 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4hnz" event={"ID":"b827dfdd-28db-421e-b55b-659784f34816","Type":"ContainerDied","Data":"c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab"} Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.211836 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4hnz" event={"ID":"b827dfdd-28db-421e-b55b-659784f34816","Type":"ContainerDied","Data":"97d1d187cb23397ad30c3bda29beb72a6cb25d130962783756817598636012d7"} Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.211869 4805 scope.go:117] "RemoveContainer" containerID="c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab" Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.215427 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4hnz" Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.237259 4805 scope.go:117] "RemoveContainer" containerID="5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d" Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.273827 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4hnz"] Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.284280 4805 scope.go:117] "RemoveContainer" containerID="5793c92e5396e9baf9ec66a0312992591db0a273ab8247859708a4f9e564ed89" Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.287474 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4hnz"] Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.323948 4805 scope.go:117] "RemoveContainer" containerID="c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab" Feb 16 22:15:32 crc kubenswrapper[4805]: E0216 22:15:32.324420 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab\": container with ID starting with c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab not found: ID does not exist" containerID="c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab" Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.324449 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab"} err="failed to get container status \"c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab\": rpc error: code = NotFound desc = could not find container \"c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab\": container with ID starting with c1b40c84f3a5bddebd2e13acf94bac72e2bed71d50fbacc8dd4d29ec1feab2ab not found: 
ID does not exist" Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.324469 4805 scope.go:117] "RemoveContainer" containerID="5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d" Feb 16 22:15:32 crc kubenswrapper[4805]: E0216 22:15:32.324844 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d\": container with ID starting with 5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d not found: ID does not exist" containerID="5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d" Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.324895 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d"} err="failed to get container status \"5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d\": rpc error: code = NotFound desc = could not find container \"5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d\": container with ID starting with 5d395ffff686a7e07fb82ee662b9e00284a01d6eb95be00cb8153a546ad4a96d not found: ID does not exist" Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.324950 4805 scope.go:117] "RemoveContainer" containerID="5793c92e5396e9baf9ec66a0312992591db0a273ab8247859708a4f9e564ed89" Feb 16 22:15:32 crc kubenswrapper[4805]: E0216 22:15:32.325323 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5793c92e5396e9baf9ec66a0312992591db0a273ab8247859708a4f9e564ed89\": container with ID starting with 5793c92e5396e9baf9ec66a0312992591db0a273ab8247859708a4f9e564ed89 not found: ID does not exist" containerID="5793c92e5396e9baf9ec66a0312992591db0a273ab8247859708a4f9e564ed89" Feb 16 22:15:32 crc kubenswrapper[4805]: I0216 22:15:32.325371 4805 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5793c92e5396e9baf9ec66a0312992591db0a273ab8247859708a4f9e564ed89"} err="failed to get container status \"5793c92e5396e9baf9ec66a0312992591db0a273ab8247859708a4f9e564ed89\": rpc error: code = NotFound desc = could not find container \"5793c92e5396e9baf9ec66a0312992591db0a273ab8247859708a4f9e564ed89\": container with ID starting with 5793c92e5396e9baf9ec66a0312992591db0a273ab8247859708a4f9e564ed89 not found: ID does not exist" Feb 16 22:15:33 crc kubenswrapper[4805]: E0216 22:15:33.615455 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:15:33 crc kubenswrapper[4805]: I0216 22:15:33.642396 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b827dfdd-28db-421e-b55b-659784f34816" path="/var/lib/kubelet/pods/b827dfdd-28db-421e-b55b-659784f34816/volumes" Feb 16 22:15:38 crc kubenswrapper[4805]: I0216 22:15:38.099141 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:15:38 crc kubenswrapper[4805]: I0216 22:15:38.099765 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:15:38 crc kubenswrapper[4805]: I0216 22:15:38.099829 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 22:15:38 crc kubenswrapper[4805]: I0216 22:15:38.100883 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f87facc416a37b89d559829757c6df4025c9840a8cbd0b6efa4bdf4f3e6d1208"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:15:38 crc kubenswrapper[4805]: I0216 22:15:38.100950 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://f87facc416a37b89d559829757c6df4025c9840a8cbd0b6efa4bdf4f3e6d1208" gracePeriod=600 Feb 16 22:15:38 crc kubenswrapper[4805]: I0216 22:15:38.314555 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="f87facc416a37b89d559829757c6df4025c9840a8cbd0b6efa4bdf4f3e6d1208" exitCode=0 Feb 16 22:15:38 crc kubenswrapper[4805]: I0216 22:15:38.314600 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"f87facc416a37b89d559829757c6df4025c9840a8cbd0b6efa4bdf4f3e6d1208"} Feb 16 22:15:38 crc kubenswrapper[4805]: I0216 22:15:38.314645 4805 scope.go:117] "RemoveContainer" containerID="dc1c25cb90cddb31897add5ddf44d9b94fcb348fa0a8aac5eadf22421b1a24fd" Feb 16 22:15:38 crc kubenswrapper[4805]: I0216 22:15:38.603257 4805 scope.go:117] "RemoveContainer" containerID="e6cb580316b32dff7e52490466991c2810d78f5397b32758285d0ef81c36c263" Feb 16 22:15:39 crc kubenswrapper[4805]: I0216 22:15:39.325906 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45"} Feb 16 22:15:39 crc kubenswrapper[4805]: E0216 22:15:39.600851 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:15:48 crc kubenswrapper[4805]: E0216 22:15:48.601018 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:15:52 crc kubenswrapper[4805]: E0216 22:15:52.600284 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:16:00 crc kubenswrapper[4805]: E0216 22:16:00.602113 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:16:05 crc kubenswrapper[4805]: E0216 22:16:05.601193 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:16:11 crc kubenswrapper[4805]: E0216 22:16:11.604625 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:16:16 crc kubenswrapper[4805]: E0216 22:16:16.601119 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:16:22 crc kubenswrapper[4805]: E0216 22:16:22.599964 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:16:30 crc kubenswrapper[4805]: E0216 22:16:30.600514 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.185577 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-69gmg"] Feb 16 22:16:31 crc kubenswrapper[4805]: E0216 22:16:31.186403 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b827dfdd-28db-421e-b55b-659784f34816" containerName="registry-server" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.186425 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b827dfdd-28db-421e-b55b-659784f34816" containerName="registry-server" Feb 16 22:16:31 crc kubenswrapper[4805]: E0216 22:16:31.186458 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b827dfdd-28db-421e-b55b-659784f34816" containerName="extract-content" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.186465 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b827dfdd-28db-421e-b55b-659784f34816" containerName="extract-content" Feb 16 22:16:31 crc kubenswrapper[4805]: E0216 22:16:31.186488 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b827dfdd-28db-421e-b55b-659784f34816" containerName="extract-utilities" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.186495 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b827dfdd-28db-421e-b55b-659784f34816" containerName="extract-utilities" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.186690 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b827dfdd-28db-421e-b55b-659784f34816" containerName="registry-server" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.188965 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.202452 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69gmg"] Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.355255 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnhtk\" (UniqueName: \"kubernetes.io/projected/46065cb0-1e01-42af-bf88-0899a9581e69-kube-api-access-tnhtk\") pod \"community-operators-69gmg\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.355352 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-utilities\") pod \"community-operators-69gmg\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.355572 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-catalog-content\") pod \"community-operators-69gmg\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.458393 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-catalog-content\") pod \"community-operators-69gmg\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.458632 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tnhtk\" (UniqueName: \"kubernetes.io/projected/46065cb0-1e01-42af-bf88-0899a9581e69-kube-api-access-tnhtk\") pod \"community-operators-69gmg\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.458706 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-utilities\") pod \"community-operators-69gmg\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.459012 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-catalog-content\") pod \"community-operators-69gmg\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.459200 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-utilities\") pod \"community-operators-69gmg\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.486915 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnhtk\" (UniqueName: \"kubernetes.io/projected/46065cb0-1e01-42af-bf88-0899a9581e69-kube-api-access-tnhtk\") pod \"community-operators-69gmg\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:31 crc kubenswrapper[4805]: I0216 22:16:31.529586 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:32 crc kubenswrapper[4805]: I0216 22:16:32.024941 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69gmg"] Feb 16 22:16:32 crc kubenswrapper[4805]: I0216 22:16:32.920512 4805 generic.go:334] "Generic (PLEG): container finished" podID="46065cb0-1e01-42af-bf88-0899a9581e69" containerID="3f972237ec8141cde0020b0adbb62137cff592a4806d85b503fbd880d0634bec" exitCode=0 Feb 16 22:16:32 crc kubenswrapper[4805]: I0216 22:16:32.920587 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69gmg" event={"ID":"46065cb0-1e01-42af-bf88-0899a9581e69","Type":"ContainerDied","Data":"3f972237ec8141cde0020b0adbb62137cff592a4806d85b503fbd880d0634bec"} Feb 16 22:16:32 crc kubenswrapper[4805]: I0216 22:16:32.920871 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69gmg" event={"ID":"46065cb0-1e01-42af-bf88-0899a9581e69","Type":"ContainerStarted","Data":"17a99291be9e79cb1f407edb9e2eeac584648c5aabff4de91a23282d635c4f15"} Feb 16 22:16:34 crc kubenswrapper[4805]: E0216 22:16:34.599536 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:16:34 crc kubenswrapper[4805]: I0216 22:16:34.943350 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69gmg" event={"ID":"46065cb0-1e01-42af-bf88-0899a9581e69","Type":"ContainerStarted","Data":"3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4"} Feb 16 22:16:35 crc kubenswrapper[4805]: I0216 22:16:35.955769 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="46065cb0-1e01-42af-bf88-0899a9581e69" containerID="3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4" exitCode=0 Feb 16 22:16:35 crc kubenswrapper[4805]: I0216 22:16:35.955837 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69gmg" event={"ID":"46065cb0-1e01-42af-bf88-0899a9581e69","Type":"ContainerDied","Data":"3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4"} Feb 16 22:16:36 crc kubenswrapper[4805]: I0216 22:16:36.966774 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69gmg" event={"ID":"46065cb0-1e01-42af-bf88-0899a9581e69","Type":"ContainerStarted","Data":"b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28"} Feb 16 22:16:36 crc kubenswrapper[4805]: I0216 22:16:36.997399 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-69gmg" podStartSLOduration=2.531897314 podStartE2EDuration="5.997381343s" podCreationTimestamp="2026-02-16 22:16:31 +0000 UTC" firstStartedPulling="2026-02-16 22:16:32.924212096 +0000 UTC m=+4810.742895421" lastFinishedPulling="2026-02-16 22:16:36.389696155 +0000 UTC m=+4814.208379450" observedRunningTime="2026-02-16 22:16:36.986400099 +0000 UTC m=+4814.805083394" watchObservedRunningTime="2026-02-16 22:16:36.997381343 +0000 UTC m=+4814.816064638" Feb 16 22:16:41 crc kubenswrapper[4805]: I0216 22:16:41.530543 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:41 crc kubenswrapper[4805]: I0216 22:16:41.531238 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:41 crc kubenswrapper[4805]: I0216 22:16:41.586430 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-69gmg" Feb 16 
22:16:42 crc kubenswrapper[4805]: I0216 22:16:42.082162 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:42 crc kubenswrapper[4805]: I0216 22:16:42.139090 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69gmg"] Feb 16 22:16:44 crc kubenswrapper[4805]: I0216 22:16:44.046010 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-69gmg" podUID="46065cb0-1e01-42af-bf88-0899a9581e69" containerName="registry-server" containerID="cri-o://b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28" gracePeriod=2 Feb 16 22:16:44 crc kubenswrapper[4805]: I0216 22:16:44.539835 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:44 crc kubenswrapper[4805]: I0216 22:16:44.692356 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-utilities\") pod \"46065cb0-1e01-42af-bf88-0899a9581e69\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " Feb 16 22:16:44 crc kubenswrapper[4805]: I0216 22:16:44.692595 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-catalog-content\") pod \"46065cb0-1e01-42af-bf88-0899a9581e69\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " Feb 16 22:16:44 crc kubenswrapper[4805]: I0216 22:16:44.692632 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnhtk\" (UniqueName: \"kubernetes.io/projected/46065cb0-1e01-42af-bf88-0899a9581e69-kube-api-access-tnhtk\") pod \"46065cb0-1e01-42af-bf88-0899a9581e69\" (UID: \"46065cb0-1e01-42af-bf88-0899a9581e69\") " Feb 
16 22:16:44 crc kubenswrapper[4805]: I0216 22:16:44.693275 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-utilities" (OuterVolumeSpecName: "utilities") pod "46065cb0-1e01-42af-bf88-0899a9581e69" (UID: "46065cb0-1e01-42af-bf88-0899a9581e69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:16:44 crc kubenswrapper[4805]: I0216 22:16:44.694889 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:16:44 crc kubenswrapper[4805]: I0216 22:16:44.698993 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46065cb0-1e01-42af-bf88-0899a9581e69-kube-api-access-tnhtk" (OuterVolumeSpecName: "kube-api-access-tnhtk") pod "46065cb0-1e01-42af-bf88-0899a9581e69" (UID: "46065cb0-1e01-42af-bf88-0899a9581e69"). InnerVolumeSpecName "kube-api-access-tnhtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:16:44 crc kubenswrapper[4805]: I0216 22:16:44.743857 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46065cb0-1e01-42af-bf88-0899a9581e69" (UID: "46065cb0-1e01-42af-bf88-0899a9581e69"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:16:44 crc kubenswrapper[4805]: I0216 22:16:44.797966 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46065cb0-1e01-42af-bf88-0899a9581e69-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:16:44 crc kubenswrapper[4805]: I0216 22:16:44.797996 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnhtk\" (UniqueName: \"kubernetes.io/projected/46065cb0-1e01-42af-bf88-0899a9581e69-kube-api-access-tnhtk\") on node \"crc\" DevicePath \"\"" Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.064072 4805 generic.go:334] "Generic (PLEG): container finished" podID="46065cb0-1e01-42af-bf88-0899a9581e69" containerID="b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28" exitCode=0 Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.064236 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69gmg" Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.064269 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69gmg" event={"ID":"46065cb0-1e01-42af-bf88-0899a9581e69","Type":"ContainerDied","Data":"b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28"} Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.064995 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69gmg" event={"ID":"46065cb0-1e01-42af-bf88-0899a9581e69","Type":"ContainerDied","Data":"17a99291be9e79cb1f407edb9e2eeac584648c5aabff4de91a23282d635c4f15"} Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.065040 4805 scope.go:117] "RemoveContainer" containerID="b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28" Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.104517 4805 scope.go:117] "RemoveContainer" 
containerID="3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4" Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.126169 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69gmg"] Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.137123 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-69gmg"] Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.142455 4805 scope.go:117] "RemoveContainer" containerID="3f972237ec8141cde0020b0adbb62137cff592a4806d85b503fbd880d0634bec" Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.186776 4805 scope.go:117] "RemoveContainer" containerID="b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28" Feb 16 22:16:45 crc kubenswrapper[4805]: E0216 22:16:45.187195 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28\": container with ID starting with b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28 not found: ID does not exist" containerID="b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28" Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.187236 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28"} err="failed to get container status \"b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28\": rpc error: code = NotFound desc = could not find container \"b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28\": container with ID starting with b209bf7c5a9b7f34a6cce32d565340f533f24967201965eb4a8d638cd8d59f28 not found: ID does not exist" Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.187261 4805 scope.go:117] "RemoveContainer" 
containerID="3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4" Feb 16 22:16:45 crc kubenswrapper[4805]: E0216 22:16:45.187676 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4\": container with ID starting with 3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4 not found: ID does not exist" containerID="3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4" Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.187706 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4"} err="failed to get container status \"3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4\": rpc error: code = NotFound desc = could not find container \"3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4\": container with ID starting with 3770c4e4a6a80ac3a5537b92cdda62034786ef6ecc01bc6211048837e52907b4 not found: ID does not exist" Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.187751 4805 scope.go:117] "RemoveContainer" containerID="3f972237ec8141cde0020b0adbb62137cff592a4806d85b503fbd880d0634bec" Feb 16 22:16:45 crc kubenswrapper[4805]: E0216 22:16:45.188051 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f972237ec8141cde0020b0adbb62137cff592a4806d85b503fbd880d0634bec\": container with ID starting with 3f972237ec8141cde0020b0adbb62137cff592a4806d85b503fbd880d0634bec not found: ID does not exist" containerID="3f972237ec8141cde0020b0adbb62137cff592a4806d85b503fbd880d0634bec" Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.188082 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3f972237ec8141cde0020b0adbb62137cff592a4806d85b503fbd880d0634bec"} err="failed to get container status \"3f972237ec8141cde0020b0adbb62137cff592a4806d85b503fbd880d0634bec\": rpc error: code = NotFound desc = could not find container \"3f972237ec8141cde0020b0adbb62137cff592a4806d85b503fbd880d0634bec\": container with ID starting with 3f972237ec8141cde0020b0adbb62137cff592a4806d85b503fbd880d0634bec not found: ID does not exist" Feb 16 22:16:45 crc kubenswrapper[4805]: E0216 22:16:45.599427 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:16:45 crc kubenswrapper[4805]: I0216 22:16:45.616026 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46065cb0-1e01-42af-bf88-0899a9581e69" path="/var/lib/kubelet/pods/46065cb0-1e01-42af-bf88-0899a9581e69/volumes" Feb 16 22:16:47 crc kubenswrapper[4805]: E0216 22:16:47.600404 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:16:56 crc kubenswrapper[4805]: E0216 22:16:56.603214 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:17:01 crc kubenswrapper[4805]: E0216 22:17:01.602864 4805 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:17:10 crc kubenswrapper[4805]: E0216 22:17:10.601376 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:17:13 crc kubenswrapper[4805]: I0216 22:17:13.609822 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:17:13 crc kubenswrapper[4805]: E0216 22:17:13.714378 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:17:13 crc kubenswrapper[4805]: E0216 22:17:13.714439 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:17:13 crc kubenswrapper[4805]: E0216 22:17:13.714559 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:17:13 crc kubenswrapper[4805]: E0216 22:17:13.715786 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:17:24 crc kubenswrapper[4805]: E0216 22:17:24.601751 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:17:25 crc kubenswrapper[4805]: E0216 22:17:25.619306 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:17:36 crc kubenswrapper[4805]: E0216 22:17:36.736912 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:17:36 crc kubenswrapper[4805]: E0216 22:17:36.737640 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:17:36 crc kubenswrapper[4805]: E0216 22:17:36.737895 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:17:36 crc kubenswrapper[4805]: E0216 22:17:36.739160 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:17:38 crc kubenswrapper[4805]: I0216 22:17:38.099867 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:17:38 crc kubenswrapper[4805]: I0216 22:17:38.100224 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:17:39 crc kubenswrapper[4805]: E0216 22:17:39.599080 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:17:49 crc kubenswrapper[4805]: E0216 22:17:49.601883 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:17:52 crc kubenswrapper[4805]: E0216 22:17:52.600287 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:18:00 crc kubenswrapper[4805]: E0216 22:18:00.602111 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:18:07 crc kubenswrapper[4805]: E0216 22:18:07.599832 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:18:08 crc kubenswrapper[4805]: I0216 22:18:08.099385 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:18:08 crc kubenswrapper[4805]: I0216 22:18:08.099687 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:18:15 crc kubenswrapper[4805]: E0216 22:18:15.601972 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:18:20 crc kubenswrapper[4805]: E0216 22:18:20.600909 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:18:27 crc kubenswrapper[4805]: E0216 22:18:27.600416 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:18:35 crc kubenswrapper[4805]: E0216 22:18:35.601883 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:18:38 crc kubenswrapper[4805]: I0216 22:18:38.099449 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:18:38 crc kubenswrapper[4805]: I0216 22:18:38.099796 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 
16 22:18:38 crc kubenswrapper[4805]: I0216 22:18:38.099837 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 22:18:38 crc kubenswrapper[4805]: I0216 22:18:38.100524 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:18:38 crc kubenswrapper[4805]: I0216 22:18:38.100574 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" gracePeriod=600 Feb 16 22:18:38 crc kubenswrapper[4805]: E0216 22:18:38.226952 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:18:38 crc kubenswrapper[4805]: I0216 22:18:38.348602 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" exitCode=0 Feb 16 22:18:38 crc kubenswrapper[4805]: I0216 22:18:38.348647 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" 
event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45"} Feb 16 22:18:38 crc kubenswrapper[4805]: I0216 22:18:38.348679 4805 scope.go:117] "RemoveContainer" containerID="f87facc416a37b89d559829757c6df4025c9840a8cbd0b6efa4bdf4f3e6d1208" Feb 16 22:18:38 crc kubenswrapper[4805]: I0216 22:18:38.349586 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:18:38 crc kubenswrapper[4805]: E0216 22:18:38.349939 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:18:39 crc kubenswrapper[4805]: E0216 22:18:39.600979 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:18:48 crc kubenswrapper[4805]: I0216 22:18:48.598304 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:18:48 crc kubenswrapper[4805]: E0216 22:18:48.599262 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:18:48 crc kubenswrapper[4805]: E0216 22:18:48.601629 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:18:52 crc kubenswrapper[4805]: E0216 22:18:52.600697 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:19:00 crc kubenswrapper[4805]: E0216 22:19:00.617775 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:19:01 crc kubenswrapper[4805]: I0216 22:19:01.599525 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:19:01 crc kubenswrapper[4805]: E0216 22:19:01.600249 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 
22:19:04 crc kubenswrapper[4805]: E0216 22:19:04.601339 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:19:15 crc kubenswrapper[4805]: E0216 22:19:15.600830 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:19:16 crc kubenswrapper[4805]: I0216 22:19:16.598611 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:19:16 crc kubenswrapper[4805]: E0216 22:19:16.599076 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:19:19 crc kubenswrapper[4805]: E0216 22:19:19.601365 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:19:26 crc kubenswrapper[4805]: E0216 22:19:26.600860 4805 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:19:29 crc kubenswrapper[4805]: I0216 22:19:29.597816 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:19:29 crc kubenswrapper[4805]: E0216 22:19:29.598371 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:19:34 crc kubenswrapper[4805]: E0216 22:19:34.601762 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:19:40 crc kubenswrapper[4805]: E0216 22:19:40.601528 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:19:41 crc kubenswrapper[4805]: I0216 22:19:41.598211 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:19:41 crc kubenswrapper[4805]: E0216 22:19:41.598739 
4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:19:47 crc kubenswrapper[4805]: E0216 22:19:47.599678 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:19:53 crc kubenswrapper[4805]: I0216 22:19:53.606351 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:19:53 crc kubenswrapper[4805]: E0216 22:19:53.608291 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:19:53 crc kubenswrapper[4805]: E0216 22:19:53.610265 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:19:58 crc kubenswrapper[4805]: E0216 22:19:58.600031 4805 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:20:04 crc kubenswrapper[4805]: E0216 22:20:04.600490 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:20:05 crc kubenswrapper[4805]: I0216 22:20:05.598442 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:20:05 crc kubenswrapper[4805]: E0216 22:20:05.599019 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:20:08 crc kubenswrapper[4805]: I0216 22:20:08.365184 4805 generic.go:334] "Generic (PLEG): container finished" podID="712b6325-4e7e-4557-ba00-fdab4a8e3f79" containerID="7bcc71e0167cadbfdb27b0a317cfb613764d5e8c65d2f26ab6068e06a2df3f84" exitCode=2 Feb 16 22:20:08 crc kubenswrapper[4805]: I0216 22:20:08.365275 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" event={"ID":"712b6325-4e7e-4557-ba00-fdab4a8e3f79","Type":"ContainerDied","Data":"7bcc71e0167cadbfdb27b0a317cfb613764d5e8c65d2f26ab6068e06a2df3f84"} Feb 16 22:20:09 
crc kubenswrapper[4805]: I0216 22:20:09.878228 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:20:09 crc kubenswrapper[4805]: I0216 22:20:09.906449 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-inventory\") pod \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\" (UID: \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " Feb 16 22:20:09 crc kubenswrapper[4805]: I0216 22:20:09.906606 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98gv4\" (UniqueName: \"kubernetes.io/projected/712b6325-4e7e-4557-ba00-fdab4a8e3f79-kube-api-access-98gv4\") pod \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\" (UID: \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " Feb 16 22:20:09 crc kubenswrapper[4805]: I0216 22:20:09.906727 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-ssh-key-openstack-edpm-ipam\") pod \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\" (UID: \"712b6325-4e7e-4557-ba00-fdab4a8e3f79\") " Feb 16 22:20:09 crc kubenswrapper[4805]: I0216 22:20:09.927075 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712b6325-4e7e-4557-ba00-fdab4a8e3f79-kube-api-access-98gv4" (OuterVolumeSpecName: "kube-api-access-98gv4") pod "712b6325-4e7e-4557-ba00-fdab4a8e3f79" (UID: "712b6325-4e7e-4557-ba00-fdab4a8e3f79"). InnerVolumeSpecName "kube-api-access-98gv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:20:09 crc kubenswrapper[4805]: I0216 22:20:09.941251 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "712b6325-4e7e-4557-ba00-fdab4a8e3f79" (UID: "712b6325-4e7e-4557-ba00-fdab4a8e3f79"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:20:09 crc kubenswrapper[4805]: I0216 22:20:09.963259 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-inventory" (OuterVolumeSpecName: "inventory") pod "712b6325-4e7e-4557-ba00-fdab4a8e3f79" (UID: "712b6325-4e7e-4557-ba00-fdab4a8e3f79"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:20:10 crc kubenswrapper[4805]: I0216 22:20:10.008894 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 22:20:10 crc kubenswrapper[4805]: I0216 22:20:10.008933 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98gv4\" (UniqueName: \"kubernetes.io/projected/712b6325-4e7e-4557-ba00-fdab4a8e3f79-kube-api-access-98gv4\") on node \"crc\" DevicePath \"\"" Feb 16 22:20:10 crc kubenswrapper[4805]: I0216 22:20:10.008946 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/712b6325-4e7e-4557-ba00-fdab4a8e3f79-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 22:20:10 crc kubenswrapper[4805]: I0216 22:20:10.450973 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" 
event={"ID":"712b6325-4e7e-4557-ba00-fdab4a8e3f79","Type":"ContainerDied","Data":"8511a497129d52890ee74c07e06cf90837e15d6f1b58811d5d6278040b577256"} Feb 16 22:20:10 crc kubenswrapper[4805]: I0216 22:20:10.451802 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8511a497129d52890ee74c07e06cf90837e15d6f1b58811d5d6278040b577256" Feb 16 22:20:10 crc kubenswrapper[4805]: I0216 22:20:10.452711 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8msrm" Feb 16 22:20:10 crc kubenswrapper[4805]: E0216 22:20:10.601136 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:20:16 crc kubenswrapper[4805]: E0216 22:20:16.600411 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:20:20 crc kubenswrapper[4805]: I0216 22:20:20.597866 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:20:20 crc kubenswrapper[4805]: E0216 22:20:20.598668 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:20:22 crc kubenswrapper[4805]: E0216 22:20:22.600774 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:20:27 crc kubenswrapper[4805]: E0216 22:20:27.599966 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:20:31 crc kubenswrapper[4805]: I0216 22:20:31.598759 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:20:31 crc kubenswrapper[4805]: E0216 22:20:31.600190 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:20:34 crc kubenswrapper[4805]: E0216 22:20:34.600431 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" 
Feb 16 22:20:42 crc kubenswrapper[4805]: E0216 22:20:42.601120 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:20:44 crc kubenswrapper[4805]: I0216 22:20:44.598067 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:20:44 crc kubenswrapper[4805]: E0216 22:20:44.598681 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:20:49 crc kubenswrapper[4805]: E0216 22:20:49.601530 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:20:57 crc kubenswrapper[4805]: E0216 22:20:57.603951 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:20:58 crc kubenswrapper[4805]: I0216 22:20:58.598318 4805 scope.go:117] "RemoveContainer" 
containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:20:58 crc kubenswrapper[4805]: E0216 22:20:58.598618 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:21:02 crc kubenswrapper[4805]: E0216 22:21:02.600579 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:21:10 crc kubenswrapper[4805]: E0216 22:21:10.601291 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:21:11 crc kubenswrapper[4805]: I0216 22:21:11.599181 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:21:11 crc kubenswrapper[4805]: E0216 22:21:11.600111 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:21:16 crc kubenswrapper[4805]: E0216 22:21:16.601072 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:21:22 crc kubenswrapper[4805]: I0216 22:21:22.598505 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:21:22 crc kubenswrapper[4805]: E0216 22:21:22.599736 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:21:22 crc kubenswrapper[4805]: E0216 22:21:22.603206 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.664845 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p7xph/must-gather-mh9w9"] Feb 16 22:21:28 crc kubenswrapper[4805]: E0216 22:21:28.665587 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46065cb0-1e01-42af-bf88-0899a9581e69" containerName="extract-utilities" Feb 16 22:21:28 crc 
kubenswrapper[4805]: I0216 22:21:28.665600 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="46065cb0-1e01-42af-bf88-0899a9581e69" containerName="extract-utilities" Feb 16 22:21:28 crc kubenswrapper[4805]: E0216 22:21:28.665629 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712b6325-4e7e-4557-ba00-fdab4a8e3f79" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.665636 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="712b6325-4e7e-4557-ba00-fdab4a8e3f79" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:21:28 crc kubenswrapper[4805]: E0216 22:21:28.665657 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46065cb0-1e01-42af-bf88-0899a9581e69" containerName="registry-server" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.665664 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="46065cb0-1e01-42af-bf88-0899a9581e69" containerName="registry-server" Feb 16 22:21:28 crc kubenswrapper[4805]: E0216 22:21:28.665680 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46065cb0-1e01-42af-bf88-0899a9581e69" containerName="extract-content" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.665689 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="46065cb0-1e01-42af-bf88-0899a9581e69" containerName="extract-content" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.665961 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="46065cb0-1e01-42af-bf88-0899a9581e69" containerName="registry-server" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.665991 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="712b6325-4e7e-4557-ba00-fdab4a8e3f79" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.667474 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p7xph/must-gather-mh9w9" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.669592 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p7xph"/"openshift-service-ca.crt" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.669918 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p7xph"/"kube-root-ca.crt" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.675119 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-p7xph"/"default-dockercfg-rns45" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.683456 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p7xph/must-gather-mh9w9"] Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.739652 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82xp6\" (UniqueName: \"kubernetes.io/projected/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-kube-api-access-82xp6\") pod \"must-gather-mh9w9\" (UID: \"4ec087dc-4c20-4c0d-893d-f0ccaf92477e\") " pod="openshift-must-gather-p7xph/must-gather-mh9w9" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.739703 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-must-gather-output\") pod \"must-gather-mh9w9\" (UID: \"4ec087dc-4c20-4c0d-893d-f0ccaf92477e\") " pod="openshift-must-gather-p7xph/must-gather-mh9w9" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.841922 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82xp6\" (UniqueName: \"kubernetes.io/projected/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-kube-api-access-82xp6\") pod \"must-gather-mh9w9\" (UID: \"4ec087dc-4c20-4c0d-893d-f0ccaf92477e\") " 
pod="openshift-must-gather-p7xph/must-gather-mh9w9" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.841969 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-must-gather-output\") pod \"must-gather-mh9w9\" (UID: \"4ec087dc-4c20-4c0d-893d-f0ccaf92477e\") " pod="openshift-must-gather-p7xph/must-gather-mh9w9" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.842592 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-must-gather-output\") pod \"must-gather-mh9w9\" (UID: \"4ec087dc-4c20-4c0d-893d-f0ccaf92477e\") " pod="openshift-must-gather-p7xph/must-gather-mh9w9" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.880144 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82xp6\" (UniqueName: \"kubernetes.io/projected/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-kube-api-access-82xp6\") pod \"must-gather-mh9w9\" (UID: \"4ec087dc-4c20-4c0d-893d-f0ccaf92477e\") " pod="openshift-must-gather-p7xph/must-gather-mh9w9" Feb 16 22:21:28 crc kubenswrapper[4805]: I0216 22:21:28.993116 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p7xph/must-gather-mh9w9" Feb 16 22:21:29 crc kubenswrapper[4805]: I0216 22:21:29.469141 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p7xph/must-gather-mh9w9"] Feb 16 22:21:29 crc kubenswrapper[4805]: I0216 22:21:29.578964 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p7xph/must-gather-mh9w9" event={"ID":"4ec087dc-4c20-4c0d-893d-f0ccaf92477e","Type":"ContainerStarted","Data":"47038ba8354c4b4addb8c61cf6326d3d77c15318075c9d41c5a30ff8a1d2bce5"} Feb 16 22:21:31 crc kubenswrapper[4805]: E0216 22:21:31.602560 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:21:35 crc kubenswrapper[4805]: E0216 22:21:35.603745 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:21:37 crc kubenswrapper[4805]: I0216 22:21:37.598683 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:21:37 crc kubenswrapper[4805]: E0216 22:21:37.603452 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:21:38 crc kubenswrapper[4805]: I0216 22:21:38.693056 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p7xph/must-gather-mh9w9" event={"ID":"4ec087dc-4c20-4c0d-893d-f0ccaf92477e","Type":"ContainerStarted","Data":"522a99826e0337f36cd4123134f69f92ae876458114fdffde9bf3ccaeb377b91"} Feb 16 22:21:38 crc kubenswrapper[4805]: I0216 22:21:38.693395 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p7xph/must-gather-mh9w9" event={"ID":"4ec087dc-4c20-4c0d-893d-f0ccaf92477e","Type":"ContainerStarted","Data":"406677f5705d8a29370ef1b1387d601091bdfbb2566cd08face4e8aa2a19372b"} Feb 16 22:21:38 crc kubenswrapper[4805]: I0216 22:21:38.710482 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-p7xph/must-gather-mh9w9" podStartSLOduration=2.6258433820000002 podStartE2EDuration="10.710462491s" podCreationTimestamp="2026-02-16 22:21:28 +0000 UTC" firstStartedPulling="2026-02-16 22:21:29.472123521 +0000 UTC m=+5107.290806816" lastFinishedPulling="2026-02-16 22:21:37.55674263 +0000 UTC m=+5115.375425925" observedRunningTime="2026-02-16 22:21:38.70633941 +0000 UTC m=+5116.525022715" watchObservedRunningTime="2026-02-16 22:21:38.710462491 +0000 UTC m=+5116.529145786" Feb 16 22:21:42 crc kubenswrapper[4805]: I0216 22:21:42.765112 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p7xph/crc-debug-8b56q"] Feb 16 22:21:42 crc kubenswrapper[4805]: I0216 22:21:42.768017 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p7xph/crc-debug-8b56q" Feb 16 22:21:42 crc kubenswrapper[4805]: I0216 22:21:42.826150 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4zwp\" (UniqueName: \"kubernetes.io/projected/aaaa54b9-cc64-4980-9614-952e54c6f8ad-kube-api-access-t4zwp\") pod \"crc-debug-8b56q\" (UID: \"aaaa54b9-cc64-4980-9614-952e54c6f8ad\") " pod="openshift-must-gather-p7xph/crc-debug-8b56q" Feb 16 22:21:42 crc kubenswrapper[4805]: I0216 22:21:42.826395 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aaaa54b9-cc64-4980-9614-952e54c6f8ad-host\") pod \"crc-debug-8b56q\" (UID: \"aaaa54b9-cc64-4980-9614-952e54c6f8ad\") " pod="openshift-must-gather-p7xph/crc-debug-8b56q" Feb 16 22:21:42 crc kubenswrapper[4805]: I0216 22:21:42.928959 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aaaa54b9-cc64-4980-9614-952e54c6f8ad-host\") pod \"crc-debug-8b56q\" (UID: \"aaaa54b9-cc64-4980-9614-952e54c6f8ad\") " pod="openshift-must-gather-p7xph/crc-debug-8b56q" Feb 16 22:21:42 crc kubenswrapper[4805]: I0216 22:21:42.929072 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4zwp\" (UniqueName: \"kubernetes.io/projected/aaaa54b9-cc64-4980-9614-952e54c6f8ad-kube-api-access-t4zwp\") pod \"crc-debug-8b56q\" (UID: \"aaaa54b9-cc64-4980-9614-952e54c6f8ad\") " pod="openshift-must-gather-p7xph/crc-debug-8b56q" Feb 16 22:21:42 crc kubenswrapper[4805]: I0216 22:21:42.929099 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aaaa54b9-cc64-4980-9614-952e54c6f8ad-host\") pod \"crc-debug-8b56q\" (UID: \"aaaa54b9-cc64-4980-9614-952e54c6f8ad\") " pod="openshift-must-gather-p7xph/crc-debug-8b56q" Feb 16 22:21:42 crc 
kubenswrapper[4805]: I0216 22:21:42.950812 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4zwp\" (UniqueName: \"kubernetes.io/projected/aaaa54b9-cc64-4980-9614-952e54c6f8ad-kube-api-access-t4zwp\") pod \"crc-debug-8b56q\" (UID: \"aaaa54b9-cc64-4980-9614-952e54c6f8ad\") " pod="openshift-must-gather-p7xph/crc-debug-8b56q" Feb 16 22:21:43 crc kubenswrapper[4805]: I0216 22:21:43.095334 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p7xph/crc-debug-8b56q" Feb 16 22:21:43 crc kubenswrapper[4805]: I0216 22:21:43.749995 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p7xph/crc-debug-8b56q" event={"ID":"aaaa54b9-cc64-4980-9614-952e54c6f8ad","Type":"ContainerStarted","Data":"8ced91576fb18225ec8a56636835a75d25ee0b876ec6d0475cb6106acde020a8"} Feb 16 22:21:46 crc kubenswrapper[4805]: E0216 22:21:46.601299 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:21:49 crc kubenswrapper[4805]: E0216 22:21:49.602357 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:21:50 crc kubenswrapper[4805]: I0216 22:21:50.599027 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:21:50 crc kubenswrapper[4805]: E0216 22:21:50.599545 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:21:56 crc kubenswrapper[4805]: I0216 22:21:56.907043 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p7xph/crc-debug-8b56q" event={"ID":"aaaa54b9-cc64-4980-9614-952e54c6f8ad","Type":"ContainerStarted","Data":"624f3040a0d5fe111a821e776efe4dd8b15c42b897190207fac7bc1588ddab01"} Feb 16 22:21:56 crc kubenswrapper[4805]: I0216 22:21:56.931801 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-p7xph/crc-debug-8b56q" podStartSLOduration=2.088234294 podStartE2EDuration="14.93177544s" podCreationTimestamp="2026-02-16 22:21:42 +0000 UTC" firstStartedPulling="2026-02-16 22:21:43.145783139 +0000 UTC m=+5120.964466434" lastFinishedPulling="2026-02-16 22:21:55.989324285 +0000 UTC m=+5133.808007580" observedRunningTime="2026-02-16 22:21:56.927130535 +0000 UTC m=+5134.745813830" watchObservedRunningTime="2026-02-16 22:21:56.93177544 +0000 UTC m=+5134.750458745" Feb 16 22:22:00 crc kubenswrapper[4805]: E0216 22:22:00.599589 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:22:01 crc kubenswrapper[4805]: E0216 22:22:01.601846 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:22:03 crc kubenswrapper[4805]: I0216 22:22:03.613391 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:22:03 crc kubenswrapper[4805]: E0216 22:22:03.614350 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:22:11 crc kubenswrapper[4805]: E0216 22:22:11.602185 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:22:14 crc kubenswrapper[4805]: I0216 22:22:14.115621 4805 generic.go:334] "Generic (PLEG): container finished" podID="aaaa54b9-cc64-4980-9614-952e54c6f8ad" containerID="624f3040a0d5fe111a821e776efe4dd8b15c42b897190207fac7bc1588ddab01" exitCode=0 Feb 16 22:22:14 crc kubenswrapper[4805]: I0216 22:22:14.115708 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p7xph/crc-debug-8b56q" event={"ID":"aaaa54b9-cc64-4980-9614-952e54c6f8ad","Type":"ContainerDied","Data":"624f3040a0d5fe111a821e776efe4dd8b15c42b897190207fac7bc1588ddab01"} Feb 16 22:22:15 crc kubenswrapper[4805]: I0216 22:22:15.249540 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p7xph/crc-debug-8b56q" Feb 16 22:22:15 crc kubenswrapper[4805]: I0216 22:22:15.292405 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p7xph/crc-debug-8b56q"] Feb 16 22:22:15 crc kubenswrapper[4805]: I0216 22:22:15.305133 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p7xph/crc-debug-8b56q"] Feb 16 22:22:15 crc kubenswrapper[4805]: I0216 22:22:15.344383 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aaaa54b9-cc64-4980-9614-952e54c6f8ad-host\") pod \"aaaa54b9-cc64-4980-9614-952e54c6f8ad\" (UID: \"aaaa54b9-cc64-4980-9614-952e54c6f8ad\") " Feb 16 22:22:15 crc kubenswrapper[4805]: I0216 22:22:15.344739 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aaaa54b9-cc64-4980-9614-952e54c6f8ad-host" (OuterVolumeSpecName: "host") pod "aaaa54b9-cc64-4980-9614-952e54c6f8ad" (UID: "aaaa54b9-cc64-4980-9614-952e54c6f8ad"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:22:15 crc kubenswrapper[4805]: I0216 22:22:15.344761 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4zwp\" (UniqueName: \"kubernetes.io/projected/aaaa54b9-cc64-4980-9614-952e54c6f8ad-kube-api-access-t4zwp\") pod \"aaaa54b9-cc64-4980-9614-952e54c6f8ad\" (UID: \"aaaa54b9-cc64-4980-9614-952e54c6f8ad\") " Feb 16 22:22:15 crc kubenswrapper[4805]: I0216 22:22:15.345614 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aaaa54b9-cc64-4980-9614-952e54c6f8ad-host\") on node \"crc\" DevicePath \"\"" Feb 16 22:22:15 crc kubenswrapper[4805]: E0216 22:22:15.600081 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:22:15 crc kubenswrapper[4805]: I0216 22:22:15.904851 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaaa54b9-cc64-4980-9614-952e54c6f8ad-kube-api-access-t4zwp" (OuterVolumeSpecName: "kube-api-access-t4zwp") pod "aaaa54b9-cc64-4980-9614-952e54c6f8ad" (UID: "aaaa54b9-cc64-4980-9614-952e54c6f8ad"). InnerVolumeSpecName "kube-api-access-t4zwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:22:15 crc kubenswrapper[4805]: I0216 22:22:15.960516 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4zwp\" (UniqueName: \"kubernetes.io/projected/aaaa54b9-cc64-4980-9614-952e54c6f8ad-kube-api-access-t4zwp\") on node \"crc\" DevicePath \"\"" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.144938 4805 scope.go:117] "RemoveContainer" containerID="624f3040a0d5fe111a821e776efe4dd8b15c42b897190207fac7bc1588ddab01" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.144967 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p7xph/crc-debug-8b56q" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.548808 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p7xph/crc-debug-tmd9x"] Feb 16 22:22:16 crc kubenswrapper[4805]: E0216 22:22:16.549671 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaaa54b9-cc64-4980-9614-952e54c6f8ad" containerName="container-00" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.549688 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaaa54b9-cc64-4980-9614-952e54c6f8ad" containerName="container-00" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.549952 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaaa54b9-cc64-4980-9614-952e54c6f8ad" containerName="container-00" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.550902 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p7xph/crc-debug-tmd9x" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.678959 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66cnb\" (UniqueName: \"kubernetes.io/projected/41740fae-bcef-419a-b8d9-0c529116b8ae-kube-api-access-66cnb\") pod \"crc-debug-tmd9x\" (UID: \"41740fae-bcef-419a-b8d9-0c529116b8ae\") " pod="openshift-must-gather-p7xph/crc-debug-tmd9x" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.679150 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41740fae-bcef-419a-b8d9-0c529116b8ae-host\") pod \"crc-debug-tmd9x\" (UID: \"41740fae-bcef-419a-b8d9-0c529116b8ae\") " pod="openshift-must-gather-p7xph/crc-debug-tmd9x" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.781664 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41740fae-bcef-419a-b8d9-0c529116b8ae-host\") pod \"crc-debug-tmd9x\" (UID: \"41740fae-bcef-419a-b8d9-0c529116b8ae\") " pod="openshift-must-gather-p7xph/crc-debug-tmd9x" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.781837 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41740fae-bcef-419a-b8d9-0c529116b8ae-host\") pod \"crc-debug-tmd9x\" (UID: \"41740fae-bcef-419a-b8d9-0c529116b8ae\") " pod="openshift-must-gather-p7xph/crc-debug-tmd9x" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.782008 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66cnb\" (UniqueName: \"kubernetes.io/projected/41740fae-bcef-419a-b8d9-0c529116b8ae-kube-api-access-66cnb\") pod \"crc-debug-tmd9x\" (UID: \"41740fae-bcef-419a-b8d9-0c529116b8ae\") " pod="openshift-must-gather-p7xph/crc-debug-tmd9x" Feb 16 22:22:16 crc 
kubenswrapper[4805]: I0216 22:22:16.799973 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66cnb\" (UniqueName: \"kubernetes.io/projected/41740fae-bcef-419a-b8d9-0c529116b8ae-kube-api-access-66cnb\") pod \"crc-debug-tmd9x\" (UID: \"41740fae-bcef-419a-b8d9-0c529116b8ae\") " pod="openshift-must-gather-p7xph/crc-debug-tmd9x" Feb 16 22:22:16 crc kubenswrapper[4805]: I0216 22:22:16.870948 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p7xph/crc-debug-tmd9x" Feb 16 22:22:17 crc kubenswrapper[4805]: I0216 22:22:17.158890 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p7xph/crc-debug-tmd9x" event={"ID":"41740fae-bcef-419a-b8d9-0c529116b8ae","Type":"ContainerStarted","Data":"4eb707d84c4d93721097f0b0134c4ae8bf420b09ea67493e78ac74563e18826f"} Feb 16 22:22:17 crc kubenswrapper[4805]: I0216 22:22:17.611791 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaaa54b9-cc64-4980-9614-952e54c6f8ad" path="/var/lib/kubelet/pods/aaaa54b9-cc64-4980-9614-952e54c6f8ad/volumes" Feb 16 22:22:18 crc kubenswrapper[4805]: I0216 22:22:18.171343 4805 generic.go:334] "Generic (PLEG): container finished" podID="41740fae-bcef-419a-b8d9-0c529116b8ae" containerID="fc60046c8a23757852bf483cf056f004f80b5ebcec3745deb565d5881f79f484" exitCode=1 Feb 16 22:22:18 crc kubenswrapper[4805]: I0216 22:22:18.171398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p7xph/crc-debug-tmd9x" event={"ID":"41740fae-bcef-419a-b8d9-0c529116b8ae","Type":"ContainerDied","Data":"fc60046c8a23757852bf483cf056f004f80b5ebcec3745deb565d5881f79f484"} Feb 16 22:22:18 crc kubenswrapper[4805]: I0216 22:22:18.211075 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p7xph/crc-debug-tmd9x"] Feb 16 22:22:18 crc kubenswrapper[4805]: I0216 22:22:18.224883 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-p7xph/crc-debug-tmd9x"] Feb 16 22:22:18 crc kubenswrapper[4805]: I0216 22:22:18.598417 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:22:18 crc kubenswrapper[4805]: E0216 22:22:18.598916 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:22:19 crc kubenswrapper[4805]: I0216 22:22:19.340233 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p7xph/crc-debug-tmd9x" Feb 16 22:22:19 crc kubenswrapper[4805]: I0216 22:22:19.443062 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41740fae-bcef-419a-b8d9-0c529116b8ae-host\") pod \"41740fae-bcef-419a-b8d9-0c529116b8ae\" (UID: \"41740fae-bcef-419a-b8d9-0c529116b8ae\") " Feb 16 22:22:19 crc kubenswrapper[4805]: I0216 22:22:19.443169 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41740fae-bcef-419a-b8d9-0c529116b8ae-host" (OuterVolumeSpecName: "host") pod "41740fae-bcef-419a-b8d9-0c529116b8ae" (UID: "41740fae-bcef-419a-b8d9-0c529116b8ae"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:22:19 crc kubenswrapper[4805]: I0216 22:22:19.443251 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66cnb\" (UniqueName: \"kubernetes.io/projected/41740fae-bcef-419a-b8d9-0c529116b8ae-kube-api-access-66cnb\") pod \"41740fae-bcef-419a-b8d9-0c529116b8ae\" (UID: \"41740fae-bcef-419a-b8d9-0c529116b8ae\") " Feb 16 22:22:19 crc kubenswrapper[4805]: I0216 22:22:19.444070 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41740fae-bcef-419a-b8d9-0c529116b8ae-host\") on node \"crc\" DevicePath \"\"" Feb 16 22:22:19 crc kubenswrapper[4805]: I0216 22:22:19.449422 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41740fae-bcef-419a-b8d9-0c529116b8ae-kube-api-access-66cnb" (OuterVolumeSpecName: "kube-api-access-66cnb") pod "41740fae-bcef-419a-b8d9-0c529116b8ae" (UID: "41740fae-bcef-419a-b8d9-0c529116b8ae"). InnerVolumeSpecName "kube-api-access-66cnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:22:19 crc kubenswrapper[4805]: I0216 22:22:19.546983 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66cnb\" (UniqueName: \"kubernetes.io/projected/41740fae-bcef-419a-b8d9-0c529116b8ae-kube-api-access-66cnb\") on node \"crc\" DevicePath \"\"" Feb 16 22:22:19 crc kubenswrapper[4805]: I0216 22:22:19.610052 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41740fae-bcef-419a-b8d9-0c529116b8ae" path="/var/lib/kubelet/pods/41740fae-bcef-419a-b8d9-0c529116b8ae/volumes" Feb 16 22:22:20 crc kubenswrapper[4805]: I0216 22:22:20.190455 4805 scope.go:117] "RemoveContainer" containerID="fc60046c8a23757852bf483cf056f004f80b5ebcec3745deb565d5881f79f484" Feb 16 22:22:20 crc kubenswrapper[4805]: I0216 22:22:20.190815 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p7xph/crc-debug-tmd9x" Feb 16 22:22:22 crc kubenswrapper[4805]: I0216 22:22:22.600154 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:22:22 crc kubenswrapper[4805]: E0216 22:22:22.735488 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:22:22 crc kubenswrapper[4805]: E0216 22:22:22.735556 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:22:22 crc kubenswrapper[4805]: E0216 22:22:22.735712 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:22:22 crc kubenswrapper[4805]: E0216 22:22:22.736936 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:22:29 crc kubenswrapper[4805]: E0216 22:22:29.603287 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.368744 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sptdc"] Feb 16 22:22:31 crc kubenswrapper[4805]: E0216 22:22:31.369549 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41740fae-bcef-419a-b8d9-0c529116b8ae" containerName="container-00" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.369560 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="41740fae-bcef-419a-b8d9-0c529116b8ae" containerName="container-00" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.369810 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="41740fae-bcef-419a-b8d9-0c529116b8ae" containerName="container-00" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.371587 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.383225 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sptdc"] Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.529855 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6cgh\" (UniqueName: \"kubernetes.io/projected/b558259c-1dad-488e-84ea-173c4d64846f-kube-api-access-q6cgh\") pod \"certified-operators-sptdc\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.529915 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-catalog-content\") pod \"certified-operators-sptdc\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.530470 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-utilities\") pod \"certified-operators-sptdc\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.633052 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-utilities\") pod \"certified-operators-sptdc\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.633282 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q6cgh\" (UniqueName: \"kubernetes.io/projected/b558259c-1dad-488e-84ea-173c4d64846f-kube-api-access-q6cgh\") pod \"certified-operators-sptdc\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.633311 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-catalog-content\") pod \"certified-operators-sptdc\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.633615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-utilities\") pod \"certified-operators-sptdc\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:31 crc kubenswrapper[4805]: I0216 22:22:31.633827 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-catalog-content\") pod \"certified-operators-sptdc\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:32 crc kubenswrapper[4805]: I0216 22:22:32.111701 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6cgh\" (UniqueName: \"kubernetes.io/projected/b558259c-1dad-488e-84ea-173c4d64846f-kube-api-access-q6cgh\") pod \"certified-operators-sptdc\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:32 crc kubenswrapper[4805]: I0216 22:22:32.301596 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:32 crc kubenswrapper[4805]: I0216 22:22:32.599088 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:22:32 crc kubenswrapper[4805]: E0216 22:22:32.599973 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:22:32 crc kubenswrapper[4805]: I0216 22:22:32.851019 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sptdc"] Feb 16 22:22:32 crc kubenswrapper[4805]: W0216 22:22:32.864320 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb558259c_1dad_488e_84ea_173c4d64846f.slice/crio-1b155d70e0e7966c54fb4c2bcdaac5ef2f49706231548ab75e55f65d95338f77 WatchSource:0}: Error finding container 1b155d70e0e7966c54fb4c2bcdaac5ef2f49706231548ab75e55f65d95338f77: Status 404 returned error can't find the container with id 1b155d70e0e7966c54fb4c2bcdaac5ef2f49706231548ab75e55f65d95338f77 Feb 16 22:22:33 crc kubenswrapper[4805]: I0216 22:22:33.322810 4805 generic.go:334] "Generic (PLEG): container finished" podID="b558259c-1dad-488e-84ea-173c4d64846f" containerID="409dc80b3bacfedd353281ab24ee987bb60639e2e03570f7a524f56e3d0725c0" exitCode=0 Feb 16 22:22:33 crc kubenswrapper[4805]: I0216 22:22:33.322928 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sptdc" 
event={"ID":"b558259c-1dad-488e-84ea-173c4d64846f","Type":"ContainerDied","Data":"409dc80b3bacfedd353281ab24ee987bb60639e2e03570f7a524f56e3d0725c0"} Feb 16 22:22:33 crc kubenswrapper[4805]: I0216 22:22:33.323179 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sptdc" event={"ID":"b558259c-1dad-488e-84ea-173c4d64846f","Type":"ContainerStarted","Data":"1b155d70e0e7966c54fb4c2bcdaac5ef2f49706231548ab75e55f65d95338f77"} Feb 16 22:22:35 crc kubenswrapper[4805]: I0216 22:22:35.346472 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sptdc" event={"ID":"b558259c-1dad-488e-84ea-173c4d64846f","Type":"ContainerStarted","Data":"08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e"} Feb 16 22:22:35 crc kubenswrapper[4805]: E0216 22:22:35.602181 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:22:36 crc kubenswrapper[4805]: I0216 22:22:36.357420 4805 generic.go:334] "Generic (PLEG): container finished" podID="b558259c-1dad-488e-84ea-173c4d64846f" containerID="08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e" exitCode=0 Feb 16 22:22:36 crc kubenswrapper[4805]: I0216 22:22:36.357522 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sptdc" event={"ID":"b558259c-1dad-488e-84ea-173c4d64846f","Type":"ContainerDied","Data":"08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e"} Feb 16 22:22:37 crc kubenswrapper[4805]: I0216 22:22:37.371536 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sptdc" 
event={"ID":"b558259c-1dad-488e-84ea-173c4d64846f","Type":"ContainerStarted","Data":"77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58"} Feb 16 22:22:37 crc kubenswrapper[4805]: I0216 22:22:37.393587 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sptdc" podStartSLOduration=2.948152119 podStartE2EDuration="6.393567649s" podCreationTimestamp="2026-02-16 22:22:31 +0000 UTC" firstStartedPulling="2026-02-16 22:22:33.325937651 +0000 UTC m=+5171.144620946" lastFinishedPulling="2026-02-16 22:22:36.771353181 +0000 UTC m=+5174.590036476" observedRunningTime="2026-02-16 22:22:37.391242496 +0000 UTC m=+5175.209925811" watchObservedRunningTime="2026-02-16 22:22:37.393567649 +0000 UTC m=+5175.212250954" Feb 16 22:22:40 crc kubenswrapper[4805]: E0216 22:22:40.731899 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:22:40 crc kubenswrapper[4805]: E0216 22:22:40.732692 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:22:40 crc kubenswrapper[4805]: E0216 22:22:40.732925 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:22:40 crc kubenswrapper[4805]: E0216 22:22:40.734895 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:22:42 crc kubenswrapper[4805]: I0216 22:22:42.302467 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:42 crc kubenswrapper[4805]: I0216 22:22:42.302889 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:43 crc kubenswrapper[4805]: I0216 22:22:43.363542 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sptdc" podUID="b558259c-1dad-488e-84ea-173c4d64846f" containerName="registry-server" probeResult="failure" output=< Feb 16 22:22:43 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 22:22:43 crc kubenswrapper[4805]: > Feb 16 22:22:43 crc kubenswrapper[4805]: I0216 22:22:43.605708 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:22:43 crc kubenswrapper[4805]: E0216 22:22:43.606436 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:22:49 crc kubenswrapper[4805]: E0216 22:22:49.599962 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:22:52 crc 
kubenswrapper[4805]: I0216 22:22:52.358349 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:52 crc kubenswrapper[4805]: I0216 22:22:52.420880 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:52 crc kubenswrapper[4805]: I0216 22:22:52.594868 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sptdc"] Feb 16 22:22:53 crc kubenswrapper[4805]: I0216 22:22:53.566482 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sptdc" podUID="b558259c-1dad-488e-84ea-173c4d64846f" containerName="registry-server" containerID="cri-o://77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58" gracePeriod=2 Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.104579 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.268135 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-utilities\") pod \"b558259c-1dad-488e-84ea-173c4d64846f\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.268250 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6cgh\" (UniqueName: \"kubernetes.io/projected/b558259c-1dad-488e-84ea-173c4d64846f-kube-api-access-q6cgh\") pod \"b558259c-1dad-488e-84ea-173c4d64846f\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.268468 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-catalog-content\") pod \"b558259c-1dad-488e-84ea-173c4d64846f\" (UID: \"b558259c-1dad-488e-84ea-173c4d64846f\") " Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.268993 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-utilities" (OuterVolumeSpecName: "utilities") pod "b558259c-1dad-488e-84ea-173c4d64846f" (UID: "b558259c-1dad-488e-84ea-173c4d64846f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.269200 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.275996 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b558259c-1dad-488e-84ea-173c4d64846f-kube-api-access-q6cgh" (OuterVolumeSpecName: "kube-api-access-q6cgh") pod "b558259c-1dad-488e-84ea-173c4d64846f" (UID: "b558259c-1dad-488e-84ea-173c4d64846f"). InnerVolumeSpecName "kube-api-access-q6cgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.327230 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b558259c-1dad-488e-84ea-173c4d64846f" (UID: "b558259c-1dad-488e-84ea-173c4d64846f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.371743 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6cgh\" (UniqueName: \"kubernetes.io/projected/b558259c-1dad-488e-84ea-173c4d64846f-kube-api-access-q6cgh\") on node \"crc\" DevicePath \"\"" Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.372167 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b558259c-1dad-488e-84ea-173c4d64846f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.585083 4805 generic.go:334] "Generic (PLEG): container finished" podID="b558259c-1dad-488e-84ea-173c4d64846f" containerID="77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58" exitCode=0 Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.585126 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sptdc" event={"ID":"b558259c-1dad-488e-84ea-173c4d64846f","Type":"ContainerDied","Data":"77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58"} Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.585160 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sptdc" event={"ID":"b558259c-1dad-488e-84ea-173c4d64846f","Type":"ContainerDied","Data":"1b155d70e0e7966c54fb4c2bcdaac5ef2f49706231548ab75e55f65d95338f77"} Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.585180 4805 scope.go:117] "RemoveContainer" containerID="77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58" Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.585350 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sptdc" Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.620956 4805 scope.go:117] "RemoveContainer" containerID="08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e" Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.644075 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sptdc"] Feb 16 22:22:54 crc kubenswrapper[4805]: I0216 22:22:54.654517 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sptdc"] Feb 16 22:22:55 crc kubenswrapper[4805]: I0216 22:22:55.423206 4805 scope.go:117] "RemoveContainer" containerID="409dc80b3bacfedd353281ab24ee987bb60639e2e03570f7a524f56e3d0725c0" Feb 16 22:22:55 crc kubenswrapper[4805]: I0216 22:22:55.490350 4805 scope.go:117] "RemoveContainer" containerID="77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58" Feb 16 22:22:55 crc kubenswrapper[4805]: E0216 22:22:55.491197 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58\": container with ID starting with 77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58 not found: ID does not exist" containerID="77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58" Feb 16 22:22:55 crc kubenswrapper[4805]: I0216 22:22:55.491254 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58"} err="failed to get container status \"77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58\": rpc error: code = NotFound desc = could not find container \"77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58\": container with ID starting with 77b5a83c7127daee31c831f983704289f100bad4b9b1bee5e29a044ae55f9b58 not 
found: ID does not exist" Feb 16 22:22:55 crc kubenswrapper[4805]: I0216 22:22:55.491278 4805 scope.go:117] "RemoveContainer" containerID="08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e" Feb 16 22:22:55 crc kubenswrapper[4805]: E0216 22:22:55.494330 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e\": container with ID starting with 08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e not found: ID does not exist" containerID="08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e" Feb 16 22:22:55 crc kubenswrapper[4805]: I0216 22:22:55.494358 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e"} err="failed to get container status \"08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e\": rpc error: code = NotFound desc = could not find container \"08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e\": container with ID starting with 08ad024c8a4f6384fbf3d87be3cfdba580fda85c26552b14e403e37e5eead35e not found: ID does not exist" Feb 16 22:22:55 crc kubenswrapper[4805]: I0216 22:22:55.494376 4805 scope.go:117] "RemoveContainer" containerID="409dc80b3bacfedd353281ab24ee987bb60639e2e03570f7a524f56e3d0725c0" Feb 16 22:22:55 crc kubenswrapper[4805]: E0216 22:22:55.494632 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409dc80b3bacfedd353281ab24ee987bb60639e2e03570f7a524f56e3d0725c0\": container with ID starting with 409dc80b3bacfedd353281ab24ee987bb60639e2e03570f7a524f56e3d0725c0 not found: ID does not exist" containerID="409dc80b3bacfedd353281ab24ee987bb60639e2e03570f7a524f56e3d0725c0" Feb 16 22:22:55 crc kubenswrapper[4805]: I0216 22:22:55.494667 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409dc80b3bacfedd353281ab24ee987bb60639e2e03570f7a524f56e3d0725c0"} err="failed to get container status \"409dc80b3bacfedd353281ab24ee987bb60639e2e03570f7a524f56e3d0725c0\": rpc error: code = NotFound desc = could not find container \"409dc80b3bacfedd353281ab24ee987bb60639e2e03570f7a524f56e3d0725c0\": container with ID starting with 409dc80b3bacfedd353281ab24ee987bb60639e2e03570f7a524f56e3d0725c0 not found: ID does not exist" Feb 16 22:22:55 crc kubenswrapper[4805]: E0216 22:22:55.601813 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:22:55 crc kubenswrapper[4805]: I0216 22:22:55.612373 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b558259c-1dad-488e-84ea-173c4d64846f" path="/var/lib/kubelet/pods/b558259c-1dad-488e-84ea-173c4d64846f/volumes" Feb 16 22:22:58 crc kubenswrapper[4805]: I0216 22:22:58.613585 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:22:58 crc kubenswrapper[4805]: E0216 22:22:58.622026 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:23:02 crc kubenswrapper[4805]: E0216 22:23:02.599894 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:23:10 crc kubenswrapper[4805]: I0216 22:23:10.598101 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:23:10 crc kubenswrapper[4805]: E0216 22:23:10.599796 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:23:10 crc kubenswrapper[4805]: E0216 22:23:10.600011 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:23:14 crc kubenswrapper[4805]: E0216 22:23:14.600024 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:23:22 crc kubenswrapper[4805]: I0216 22:23:22.598543 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:23:22 crc kubenswrapper[4805]: E0216 22:23:22.599616 4805 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:23:25 crc kubenswrapper[4805]: E0216 22:23:25.601888 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:23:27 crc kubenswrapper[4805]: E0216 22:23:27.601706 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:23:32 crc kubenswrapper[4805]: I0216 22:23:32.788053 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e7b0acc2-1c23-4182-85ca-3ab0293b64a0/aodh-listener/0.log" Feb 16 22:23:32 crc kubenswrapper[4805]: I0216 22:23:32.823443 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e7b0acc2-1c23-4182-85ca-3ab0293b64a0/aodh-api/0.log" Feb 16 22:23:32 crc kubenswrapper[4805]: I0216 22:23:32.831818 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e7b0acc2-1c23-4182-85ca-3ab0293b64a0/aodh-evaluator/0.log" Feb 16 22:23:32 crc kubenswrapper[4805]: I0216 22:23:32.996025 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-5958664456-5mzsf_e742c4b3-4b27-4dd3-bbf7-8a005f496802/barbican-api/0.log" Feb 16 22:23:33 crc kubenswrapper[4805]: I0216 22:23:33.049825 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5958664456-5mzsf_e742c4b3-4b27-4dd3-bbf7-8a005f496802/barbican-api-log/0.log" Feb 16 22:23:33 crc kubenswrapper[4805]: I0216 22:23:33.071589 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e7b0acc2-1c23-4182-85ca-3ab0293b64a0/aodh-notifier/0.log" Feb 16 22:23:33 crc kubenswrapper[4805]: I0216 22:23:33.217238 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f5cbfc9c8-dwmdk_6fa81bfa-8c27-4546-9c30-1c52781a7ecb/barbican-keystone-listener/0.log" Feb 16 22:23:33 crc kubenswrapper[4805]: I0216 22:23:33.313128 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f5cbfc9c8-dwmdk_6fa81bfa-8c27-4546-9c30-1c52781a7ecb/barbican-keystone-listener-log/0.log" Feb 16 22:23:33 crc kubenswrapper[4805]: I0216 22:23:33.420071 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d48f79d95-n857j_9503d6c3-cc2c-4a51-89c7-33339db1da77/barbican-worker/0.log" Feb 16 22:23:33 crc kubenswrapper[4805]: I0216 22:23:33.492389 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d48f79d95-n857j_9503d6c3-cc2c-4a51-89c7-33339db1da77/barbican-worker-log/0.log" Feb 16 22:23:33 crc kubenswrapper[4805]: I0216 22:23:33.657510 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-224bc_90fd8fac-cdc0-402a-bb3d-746e06e28b6a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 22:23:33 crc kubenswrapper[4805]: I0216 22:23:33.814261 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_f2bbe998-2ee6-4b84-b723-42b1c4381ebc/ceilometer-notification-agent/0.log" Feb 16 22:23:33 crc kubenswrapper[4805]: I0216 22:23:33.898176 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2bbe998-2ee6-4b84-b723-42b1c4381ebc/proxy-httpd/0.log" Feb 16 22:23:34 crc kubenswrapper[4805]: I0216 22:23:34.101915 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2bbe998-2ee6-4b84-b723-42b1c4381ebc/sg-core/0.log" Feb 16 22:23:34 crc kubenswrapper[4805]: I0216 22:23:34.285515 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f241b99d-b7d7-4897-9cfa-bd3201582861/cinder-api/0.log" Feb 16 22:23:34 crc kubenswrapper[4805]: I0216 22:23:34.356564 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f241b99d-b7d7-4897-9cfa-bd3201582861/cinder-api-log/0.log" Feb 16 22:23:34 crc kubenswrapper[4805]: I0216 22:23:34.451473 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_23560f03-f6f6-48a5-9d10-b797e3d8042e/cinder-scheduler/0.log" Feb 16 22:23:34 crc kubenswrapper[4805]: I0216 22:23:34.507708 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_23560f03-f6f6-48a5-9d10-b797e3d8042e/probe/0.log" Feb 16 22:23:34 crc kubenswrapper[4805]: I0216 22:23:34.596553 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-fncpn_a710016e-8c14-45a3-b4c5-2b11b3fecd2a/init/0.log" Feb 16 22:23:34 crc kubenswrapper[4805]: I0216 22:23:34.777503 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-fncpn_a710016e-8c14-45a3-b4c5-2b11b3fecd2a/init/0.log" Feb 16 22:23:34 crc kubenswrapper[4805]: I0216 22:23:34.792963 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-fncpn_a710016e-8c14-45a3-b4c5-2b11b3fecd2a/dnsmasq-dns/0.log" Feb 16 22:23:34 crc kubenswrapper[4805]: I0216 22:23:34.850083 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2njwx_f7abc29d-8762-4f66-9b74-5bae943250ee/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 22:23:35 crc kubenswrapper[4805]: I0216 22:23:35.085807 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-4v99m_54cd5193-d167-4eaa-86bf-3e5ca7a7703a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 22:23:35 crc kubenswrapper[4805]: I0216 22:23:35.144404 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-8msrm_712b6325-4e7e-4557-ba00-fdab4a8e3f79/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 22:23:35 crc kubenswrapper[4805]: I0216 22:23:35.371973 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rlb2g_9a751413-e386-4261-bcb7-830a111a4399/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 22:23:35 crc kubenswrapper[4805]: I0216 22:23:35.372027 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-98zrl_d937b07f-01b5-4ac1-8cc7-c1db2e1876bb/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 22:23:35 crc kubenswrapper[4805]: I0216 22:23:35.589322 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sdxlm_92a3c856-2ffd-4e1b-9178-81719ac447f5/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 22:23:35 crc kubenswrapper[4805]: I0216 22:23:35.692518 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wrlpk_fe35a496-fcca-49d1-92f0-1356c05feb2b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 22:23:35 crc kubenswrapper[4805]: I0216 22:23:35.886242 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e698d49d-5318-412e-98aa-1b979e265892/glance-httpd/0.log" Feb 16 22:23:35 crc kubenswrapper[4805]: I0216 22:23:35.937006 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e698d49d-5318-412e-98aa-1b979e265892/glance-log/0.log" Feb 16 22:23:35 crc kubenswrapper[4805]: I0216 22:23:35.952585 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3bef306d-96b1-4442-a34e-b6e8aa67ec62/glance-httpd/0.log" Feb 16 22:23:36 crc kubenswrapper[4805]: I0216 22:23:36.001545 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3bef306d-96b1-4442-a34e-b6e8aa67ec62/glance-log/0.log" Feb 16 22:23:36 crc kubenswrapper[4805]: I0216 22:23:36.511520 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-55c677d475-j7xgs_3f0af068-e25b-4fd8-aa7b-9898e0341869/heat-engine/0.log" Feb 16 22:23:36 crc kubenswrapper[4805]: I0216 22:23:36.599625 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:23:36 crc kubenswrapper[4805]: E0216 22:23:36.600530 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:23:36 crc 
kubenswrapper[4805]: I0216 22:23:36.815018 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-748d64cf47-dqzh6_973fa704-45c8-4ebf-8517-ea1c878cfce9/heat-cfnapi/0.log" Feb 16 22:23:36 crc kubenswrapper[4805]: I0216 22:23:36.873052 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-747f5c598c-x2pl7_52d69bb9-6a6c-4f70-8319-730e54f0e66a/keystone-api/0.log" Feb 16 22:23:36 crc kubenswrapper[4805]: I0216 22:23:36.880043 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7587bc9c56-x54w7_12fe6368-7dd3-443c-a135-328753625d21/heat-api/0.log" Feb 16 22:23:37 crc kubenswrapper[4805]: I0216 22:23:37.049921 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29521321-x6zk9_cd502f9b-caad-477e-8f1e-82567d04366f/keystone-cron/0.log" Feb 16 22:23:37 crc kubenswrapper[4805]: I0216 22:23:37.190886 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_2699bf95-c138-4388-9aca-256620ea3458/kube-state-metrics/0.log" Feb 16 22:23:37 crc kubenswrapper[4805]: I0216 22:23:37.508997 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_2a1dcf12-f32a-4822-9458-aa0a10e4afbf/mysqld-exporter/0.log" Feb 16 22:23:37 crc kubenswrapper[4805]: I0216 22:23:37.568697 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8fbb985b9-2x2rd_37233461-85b7-4069-885f-b5a1ac819473/neutron-api/0.log" Feb 16 22:23:37 crc kubenswrapper[4805]: E0216 22:23:37.599626 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:23:37 crc kubenswrapper[4805]: I0216 22:23:37.656703 4805 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8fbb985b9-2x2rd_37233461-85b7-4069-885f-b5a1ac819473/neutron-httpd/0.log" Feb 16 22:23:37 crc kubenswrapper[4805]: I0216 22:23:37.946570 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_203349b1-a943-4795-ad7a-b5bd48435b86/nova-api-log/0.log" Feb 16 22:23:38 crc kubenswrapper[4805]: I0216 22:23:38.205572 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_14dbc69b-9207-42a2-becf-d09dc88763cf/nova-cell0-conductor-conductor/0.log" Feb 16 22:23:38 crc kubenswrapper[4805]: I0216 22:23:38.262984 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_203349b1-a943-4795-ad7a-b5bd48435b86/nova-api-api/0.log" Feb 16 22:23:38 crc kubenswrapper[4805]: I0216 22:23:38.295614 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f3312fbc-9e01-40d8-b648-89d1c8747aad/nova-cell1-conductor-conductor/0.log" Feb 16 22:23:38 crc kubenswrapper[4805]: E0216 22:23:38.599102 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:23:38 crc kubenswrapper[4805]: I0216 22:23:38.686881 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3ec05129-5695-43b4-af95-d5335dc56879/nova-metadata-log/0.log" Feb 16 22:23:38 crc kubenswrapper[4805]: I0216 22:23:38.703779 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_1958022f-e55d-473a-8a90-1c3238569c9c/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 22:23:39 crc kubenswrapper[4805]: I0216 22:23:39.066015 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_0818f43c-cd3e-4a45-9970-d9efedc87f5b/nova-scheduler-scheduler/0.log" Feb 16 22:23:39 crc kubenswrapper[4805]: I0216 22:23:39.104150 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_26f1c84d-9566-4135-a24a-ce299c76a102/mysql-bootstrap/0.log" Feb 16 22:23:39 crc kubenswrapper[4805]: I0216 22:23:39.296953 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_26f1c84d-9566-4135-a24a-ce299c76a102/mysql-bootstrap/0.log" Feb 16 22:23:39 crc kubenswrapper[4805]: I0216 22:23:39.342836 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_26f1c84d-9566-4135-a24a-ce299c76a102/galera/0.log" Feb 16 22:23:39 crc kubenswrapper[4805]: I0216 22:23:39.555447 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8b9deffe-ab6a-46d4-a463-9ed81e6f3889/mysql-bootstrap/0.log" Feb 16 22:23:39 crc kubenswrapper[4805]: I0216 22:23:39.722004 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8b9deffe-ab6a-46d4-a463-9ed81e6f3889/mysql-bootstrap/0.log" Feb 16 22:23:39 crc kubenswrapper[4805]: I0216 22:23:39.795986 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8b9deffe-ab6a-46d4-a463-9ed81e6f3889/galera/0.log" Feb 16 22:23:39 crc kubenswrapper[4805]: I0216 22:23:39.955035 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1d4f5d67-11fe-406b-ac3d-48fb09f5a513/openstackclient/0.log" Feb 16 22:23:40 crc kubenswrapper[4805]: I0216 22:23:40.130839 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hch2z_45d56588-d2f3-4207-8338-c39de08d752b/openstack-network-exporter/0.log" Feb 16 22:23:40 crc kubenswrapper[4805]: I0216 22:23:40.546862 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ntwbd_127a1d16-9779-4760-88eb-28d61312ef0f/ovn-controller/0.log" Feb 16 22:23:40 crc kubenswrapper[4805]: I0216 22:23:40.550040 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3ec05129-5695-43b4-af95-d5335dc56879/nova-metadata-metadata/0.log" Feb 16 22:23:40 crc kubenswrapper[4805]: I0216 22:23:40.715099 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jtmkd_faacbcd6-a65d-46c0-9173-f96b12b74793/ovsdb-server-init/0.log" Feb 16 22:23:40 crc kubenswrapper[4805]: I0216 22:23:40.877453 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jtmkd_faacbcd6-a65d-46c0-9173-f96b12b74793/ovsdb-server-init/0.log" Feb 16 22:23:40 crc kubenswrapper[4805]: I0216 22:23:40.927050 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jtmkd_faacbcd6-a65d-46c0-9173-f96b12b74793/ovs-vswitchd/0.log" Feb 16 22:23:40 crc kubenswrapper[4805]: I0216 22:23:40.943966 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jtmkd_faacbcd6-a65d-46c0-9173-f96b12b74793/ovsdb-server/0.log" Feb 16 22:23:41 crc kubenswrapper[4805]: I0216 22:23:41.150176 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_532c871c-9fef-4023-a49c-ef44566659ff/openstack-network-exporter/0.log" Feb 16 22:23:41 crc kubenswrapper[4805]: I0216 22:23:41.162074 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_532c871c-9fef-4023-a49c-ef44566659ff/ovn-northd/0.log" Feb 16 22:23:41 crc kubenswrapper[4805]: I0216 22:23:41.394192 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_081f6c0e-a934-4a00-8be2-8bc55acb9585/openstack-network-exporter/0.log" Feb 16 22:23:41 crc kubenswrapper[4805]: I0216 22:23:41.468209 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_081f6c0e-a934-4a00-8be2-8bc55acb9585/ovsdbserver-nb/0.log" Feb 16 22:23:41 crc kubenswrapper[4805]: I0216 22:23:41.607247 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_01a7894c-709c-47a8-990f-b051e2199694/openstack-network-exporter/0.log" Feb 16 22:23:41 crc kubenswrapper[4805]: I0216 22:23:41.628751 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_01a7894c-709c-47a8-990f-b051e2199694/ovsdbserver-sb/0.log" Feb 16 22:23:41 crc kubenswrapper[4805]: I0216 22:23:41.798189 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f8cfbb668-2nz5c_1044e6f4-8331-45b2-b130-aee982e7c595/placement-api/0.log" Feb 16 22:23:41 crc kubenswrapper[4805]: I0216 22:23:41.863496 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f8cfbb668-2nz5c_1044e6f4-8331-45b2-b130-aee982e7c595/placement-log/0.log" Feb 16 22:23:41 crc kubenswrapper[4805]: I0216 22:23:41.950923 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c6769912-8cfc-48b8-b709-5398ca380e38/init-config-reloader/0.log" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.150875 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c6769912-8cfc-48b8-b709-5398ca380e38/config-reloader/0.log" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.155270 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c6769912-8cfc-48b8-b709-5398ca380e38/init-config-reloader/0.log" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.218656 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c6769912-8cfc-48b8-b709-5398ca380e38/thanos-sidecar/0.log" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.314380 4805 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c6769912-8cfc-48b8-b709-5398ca380e38/prometheus/0.log" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.502236 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ee307678-615e-4eaf-be4c-6e44e3a31f27/setup-container/0.log" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.658439 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ee307678-615e-4eaf-be4c-6e44e3a31f27/setup-container/0.log" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.674752 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ee307678-615e-4eaf-be4c-6e44e3a31f27/rabbitmq/0.log" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.756451 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_46463b23-6dbc-4d91-8942-687596251b5b/setup-container/0.log" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.891809 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wxq4m"] Feb 16 22:23:42 crc kubenswrapper[4805]: E0216 22:23:42.892476 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558259c-1dad-488e-84ea-173c4d64846f" containerName="registry-server" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.892498 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558259c-1dad-488e-84ea-173c4d64846f" containerName="registry-server" Feb 16 22:23:42 crc kubenswrapper[4805]: E0216 22:23:42.892541 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558259c-1dad-488e-84ea-173c4d64846f" containerName="extract-utilities" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.892550 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558259c-1dad-488e-84ea-173c4d64846f" containerName="extract-utilities" Feb 16 22:23:42 crc kubenswrapper[4805]: 
E0216 22:23:42.892578 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558259c-1dad-488e-84ea-173c4d64846f" containerName="extract-content" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.892587 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558259c-1dad-488e-84ea-173c4d64846f" containerName="extract-content" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.892967 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b558259c-1dad-488e-84ea-173c4d64846f" containerName="registry-server" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.895217 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.927249 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxq4m"] Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.942817 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7b8z\" (UniqueName: \"kubernetes.io/projected/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-kube-api-access-m7b8z\") pod \"redhat-operators-wxq4m\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.942882 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-catalog-content\") pod \"redhat-operators-wxq4m\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:42 crc kubenswrapper[4805]: I0216 22:23:42.943041 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-utilities\") pod \"redhat-operators-wxq4m\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.045446 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-utilities\") pod \"redhat-operators-wxq4m\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.045539 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7b8z\" (UniqueName: \"kubernetes.io/projected/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-kube-api-access-m7b8z\") pod \"redhat-operators-wxq4m\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.045590 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-catalog-content\") pod \"redhat-operators-wxq4m\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.046189 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-catalog-content\") pod \"redhat-operators-wxq4m\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.046394 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-utilities\") pod \"redhat-operators-wxq4m\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.062423 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3d3db43a-846e-4b7b-b5ae-5711dc76477f/setup-container/0.log" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.066057 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_46463b23-6dbc-4d91-8942-687596251b5b/setup-container/0.log" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.071465 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7b8z\" (UniqueName: \"kubernetes.io/projected/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-kube-api-access-m7b8z\") pod \"redhat-operators-wxq4m\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.192496 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_46463b23-6dbc-4d91-8942-687596251b5b/rabbitmq/0.log" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.219333 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.501583 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3d3db43a-846e-4b7b-b5ae-5711dc76477f/setup-container/0.log" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.513128 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3d3db43a-846e-4b7b-b5ae-5711dc76477f/rabbitmq/0.log" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.552493 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_57bae43a-529b-4748-8a58-63b1a1c6db10/setup-container/0.log" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.782057 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxq4m"] Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.862915 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_57bae43a-529b-4748-8a58-63b1a1c6db10/setup-container/0.log" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.912599 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5bj87_d713a0aa-87d9-4550-80a8-9b661ef5c585/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 22:23:43 crc kubenswrapper[4805]: I0216 22:23:43.932052 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_57bae43a-529b-4748-8a58-63b1a1c6db10/rabbitmq/0.log" Feb 16 22:23:44 crc kubenswrapper[4805]: I0216 22:23:44.232130 4805 generic.go:334] "Generic (PLEG): container finished" podID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerID="ddc747ccfde6c106e3d29b918333f5081e55e6178fec2ab92ab8a02c961c34f1" exitCode=0 Feb 16 22:23:44 crc kubenswrapper[4805]: I0216 22:23:44.232382 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wxq4m" event={"ID":"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b","Type":"ContainerDied","Data":"ddc747ccfde6c106e3d29b918333f5081e55e6178fec2ab92ab8a02c961c34f1"} Feb 16 22:23:44 crc kubenswrapper[4805]: I0216 22:23:44.232407 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxq4m" event={"ID":"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b","Type":"ContainerStarted","Data":"4cb46c9b8fac94a28698127071a73b9733416f3767c289b825b2b50f6c010e9f"} Feb 16 22:23:44 crc kubenswrapper[4805]: I0216 22:23:44.430324 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7688d557bc-2jgzd_95ea5d76-aedb-4a0a-a03d-fdc9140265e4/proxy-httpd/0.log" Feb 16 22:23:44 crc kubenswrapper[4805]: I0216 22:23:44.452482 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8thwd_8938f803-35ca-4231-81e3-fbf996af4142/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 22:23:44 crc kubenswrapper[4805]: I0216 22:23:44.668664 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7688d557bc-2jgzd_95ea5d76-aedb-4a0a-a03d-fdc9140265e4/proxy-server/0.log" Feb 16 22:23:44 crc kubenswrapper[4805]: I0216 22:23:44.905992 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-tgvc9_8f409f3b-50d6-47b1-9abb-e90ba2cc03ab/swift-ring-rebalance/0.log" Feb 16 22:23:44 crc kubenswrapper[4805]: I0216 22:23:44.942275 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/account-auditor/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.134807 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/account-reaper/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.180584 4805 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/account-replicator/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.190698 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/account-server/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.245353 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxq4m" event={"ID":"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b","Type":"ContainerStarted","Data":"e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c"} Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.254219 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/container-auditor/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.364523 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/container-replicator/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.419765 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/container-server/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.451755 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/container-updater/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.566469 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/object-expirer/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.571695 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/object-auditor/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 
22:23:45.697493 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/object-replicator/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.702464 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/object-server/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.791172 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/rsync/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.794303 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/object-updater/0.log" Feb 16 22:23:45 crc kubenswrapper[4805]: I0216 22:23:45.922048 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b51bad1e-99c6-4e2b-ae2b-c7e338ef235e/swift-recon-cron/0.log" Feb 16 22:23:48 crc kubenswrapper[4805]: E0216 22:23:48.599447 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:23:49 crc kubenswrapper[4805]: I0216 22:23:49.599089 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:23:51 crc kubenswrapper[4805]: I0216 22:23:51.320986 4805 generic.go:334] "Generic (PLEG): container finished" podID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerID="e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c" exitCode=0 Feb 16 22:23:51 crc kubenswrapper[4805]: I0216 22:23:51.321065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wxq4m" event={"ID":"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b","Type":"ContainerDied","Data":"e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c"} Feb 16 22:23:51 crc kubenswrapper[4805]: I0216 22:23:51.328510 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"d90cb53c820da42f245d8d47ebd494fcd16f9b063cd125b56ccd5fdeaf264f12"} Feb 16 22:23:51 crc kubenswrapper[4805]: E0216 22:23:51.600213 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:23:51 crc kubenswrapper[4805]: I0216 22:23:51.678274 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_63c31b7f-0d91-4d04-87b2-2f85a7baf260/memcached/0.log" Feb 16 22:23:52 crc kubenswrapper[4805]: I0216 22:23:52.365400 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxq4m" event={"ID":"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b","Type":"ContainerStarted","Data":"9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad"} Feb 16 22:23:52 crc kubenswrapper[4805]: I0216 22:23:52.387434 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wxq4m" podStartSLOduration=2.683306306 podStartE2EDuration="10.387416934s" podCreationTimestamp="2026-02-16 22:23:42 +0000 UTC" firstStartedPulling="2026-02-16 22:23:44.238934507 +0000 UTC m=+5242.057617802" lastFinishedPulling="2026-02-16 22:23:51.943045135 +0000 UTC m=+5249.761728430" observedRunningTime="2026-02-16 22:23:52.384602838 +0000 UTC 
m=+5250.203286143" watchObservedRunningTime="2026-02-16 22:23:52.387416934 +0000 UTC m=+5250.206100229" Feb 16 22:23:53 crc kubenswrapper[4805]: I0216 22:23:53.220479 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:53 crc kubenswrapper[4805]: I0216 22:23:53.221021 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:23:54 crc kubenswrapper[4805]: I0216 22:23:54.318338 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wxq4m" podUID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerName="registry-server" probeResult="failure" output=< Feb 16 22:23:54 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 22:23:54 crc kubenswrapper[4805]: > Feb 16 22:23:59 crc kubenswrapper[4805]: E0216 22:23:59.601001 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:24:02 crc kubenswrapper[4805]: E0216 22:24:02.599387 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:24:03 crc kubenswrapper[4805]: I0216 22:24:03.281829 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:24:03 crc kubenswrapper[4805]: I0216 22:24:03.344793 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:24:03 crc kubenswrapper[4805]: I0216 22:24:03.521741 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxq4m"] Feb 16 22:24:04 crc kubenswrapper[4805]: I0216 22:24:04.525936 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wxq4m" podUID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerName="registry-server" containerID="cri-o://9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad" gracePeriod=2 Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.127875 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.164101 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-utilities\") pod \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.164158 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-catalog-content\") pod \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.164216 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7b8z\" (UniqueName: \"kubernetes.io/projected/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-kube-api-access-m7b8z\") pod \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\" (UID: \"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b\") " Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.166751 4805 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-utilities" (OuterVolumeSpecName: "utilities") pod "d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" (UID: "d2d9c35d-5cfa-4b2d-b211-833d32a72d1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.170552 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-kube-api-access-m7b8z" (OuterVolumeSpecName: "kube-api-access-m7b8z") pod "d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" (UID: "d2d9c35d-5cfa-4b2d-b211-833d32a72d1b"). InnerVolumeSpecName "kube-api-access-m7b8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.267321 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.267543 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7b8z\" (UniqueName: \"kubernetes.io/projected/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-kube-api-access-m7b8z\") on node \"crc\" DevicePath \"\"" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.311575 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" (UID: "d2d9c35d-5cfa-4b2d-b211-833d32a72d1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.369769 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.538496 4805 generic.go:334] "Generic (PLEG): container finished" podID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerID="9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad" exitCode=0 Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.538558 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxq4m" event={"ID":"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b","Type":"ContainerDied","Data":"9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad"} Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.538617 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxq4m" event={"ID":"d2d9c35d-5cfa-4b2d-b211-833d32a72d1b","Type":"ContainerDied","Data":"4cb46c9b8fac94a28698127071a73b9733416f3767c289b825b2b50f6c010e9f"} Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.538637 4805 scope.go:117] "RemoveContainer" containerID="9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.538574 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wxq4m" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.576405 4805 scope.go:117] "RemoveContainer" containerID="e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.582406 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxq4m"] Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.594512 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wxq4m"] Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.598296 4805 scope.go:117] "RemoveContainer" containerID="ddc747ccfde6c106e3d29b918333f5081e55e6178fec2ab92ab8a02c961c34f1" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.617808 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" path="/var/lib/kubelet/pods/d2d9c35d-5cfa-4b2d-b211-833d32a72d1b/volumes" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.654833 4805 scope.go:117] "RemoveContainer" containerID="9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad" Feb 16 22:24:05 crc kubenswrapper[4805]: E0216 22:24:05.655348 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad\": container with ID starting with 9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad not found: ID does not exist" containerID="9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.655412 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad"} err="failed to get container status 
\"9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad\": rpc error: code = NotFound desc = could not find container \"9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad\": container with ID starting with 9ed405da0ba7eb4f873c0fd95a81e2bcf6684323a536f41d6e00c46e3d6970ad not found: ID does not exist" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.655452 4805 scope.go:117] "RemoveContainer" containerID="e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c" Feb 16 22:24:05 crc kubenswrapper[4805]: E0216 22:24:05.655788 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c\": container with ID starting with e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c not found: ID does not exist" containerID="e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.655823 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c"} err="failed to get container status \"e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c\": rpc error: code = NotFound desc = could not find container \"e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c\": container with ID starting with e7e28c14de573b8e6f3dc5307491a9b7dd4c9d76f5c27b8946341e904505e59c not found: ID does not exist" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.655843 4805 scope.go:117] "RemoveContainer" containerID="ddc747ccfde6c106e3d29b918333f5081e55e6178fec2ab92ab8a02c961c34f1" Feb 16 22:24:05 crc kubenswrapper[4805]: E0216 22:24:05.656105 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ddc747ccfde6c106e3d29b918333f5081e55e6178fec2ab92ab8a02c961c34f1\": container with ID starting with ddc747ccfde6c106e3d29b918333f5081e55e6178fec2ab92ab8a02c961c34f1 not found: ID does not exist" containerID="ddc747ccfde6c106e3d29b918333f5081e55e6178fec2ab92ab8a02c961c34f1" Feb 16 22:24:05 crc kubenswrapper[4805]: I0216 22:24:05.656136 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc747ccfde6c106e3d29b918333f5081e55e6178fec2ab92ab8a02c961c34f1"} err="failed to get container status \"ddc747ccfde6c106e3d29b918333f5081e55e6178fec2ab92ab8a02c961c34f1\": rpc error: code = NotFound desc = could not find container \"ddc747ccfde6c106e3d29b918333f5081e55e6178fec2ab92ab8a02c961c34f1\": container with ID starting with ddc747ccfde6c106e3d29b918333f5081e55e6178fec2ab92ab8a02c961c34f1 not found: ID does not exist" Feb 16 22:24:14 crc kubenswrapper[4805]: E0216 22:24:14.599769 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:24:14 crc kubenswrapper[4805]: E0216 22:24:14.599877 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:24:16 crc kubenswrapper[4805]: I0216 22:24:16.964031 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz_3110bc98-6c48-4dac-a96d-14ab481061c1/util/0.log" Feb 16 22:24:17 crc kubenswrapper[4805]: I0216 22:24:17.226594 
4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz_3110bc98-6c48-4dac-a96d-14ab481061c1/util/0.log" Feb 16 22:24:17 crc kubenswrapper[4805]: I0216 22:24:17.260310 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz_3110bc98-6c48-4dac-a96d-14ab481061c1/pull/0.log" Feb 16 22:24:17 crc kubenswrapper[4805]: I0216 22:24:17.450965 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz_3110bc98-6c48-4dac-a96d-14ab481061c1/pull/0.log" Feb 16 22:24:17 crc kubenswrapper[4805]: I0216 22:24:17.618636 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz_3110bc98-6c48-4dac-a96d-14ab481061c1/util/0.log" Feb 16 22:24:17 crc kubenswrapper[4805]: I0216 22:24:17.674038 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz_3110bc98-6c48-4dac-a96d-14ab481061c1/pull/0.log" Feb 16 22:24:17 crc kubenswrapper[4805]: I0216 22:24:17.830296 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d0e80f6e18b61f13075506fa08f3d184634c45b7d207c6c8c071e4c76emf9kz_3110bc98-6c48-4dac-a96d-14ab481061c1/extract/0.log" Feb 16 22:24:18 crc kubenswrapper[4805]: I0216 22:24:18.007328 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-jtxhs_5b0862c2-4070-4639-94cc-c29e08f49bf1/manager/0.log" Feb 16 22:24:18 crc kubenswrapper[4805]: I0216 22:24:18.346234 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-djjn2_21218328-0794-4bb6-aa02-2bb8fa48f6b9/manager/0.log" Feb 16 22:24:18 crc kubenswrapper[4805]: I0216 22:24:18.920238 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-77f85_f2b71132-ee94-4b2a-ad19-ab9dde9013ef/manager/0.log" Feb 16 22:24:19 crc kubenswrapper[4805]: I0216 22:24:19.047811 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-rwjb2_e38af58d-0049-4d9c-a834-ca048c0b171f/manager/0.log" Feb 16 22:24:19 crc kubenswrapper[4805]: I0216 22:24:19.599031 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-4xgqc_431105e4-6dfd-4644-ae7a-521284b98eda/manager/0.log" Feb 16 22:24:19 crc kubenswrapper[4805]: I0216 22:24:19.856471 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-dm6f6_e48823c7-c98e-447b-b539-1ce95bd2d3ba/manager/0.log" Feb 16 22:24:19 crc kubenswrapper[4805]: I0216 22:24:19.860818 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-s2t59_a745e178-a8a5-4f2b-b9bd-ad41a35f6140/manager/0.log" Feb 16 22:24:20 crc kubenswrapper[4805]: I0216 22:24:20.124350 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-2mzjb_2840ffe3-d3c1-4faf-bb32-f9c17173713f/manager/0.log" Feb 16 22:24:20 crc kubenswrapper[4805]: I0216 22:24:20.282677 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-dghtj_6bb4da12-019d-4101-a5eb-e0c85421d029/manager/0.log" Feb 16 22:24:20 crc kubenswrapper[4805]: I0216 22:24:20.758047 4805 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-mvwth_856ddca1-f396-432a-b33a-9fa0c1611e29/manager/0.log" Feb 16 22:24:21 crc kubenswrapper[4805]: I0216 22:24:21.102626 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-66xx2_f43b76e3-db2c-40f4-80fa-77ed9f196cf5/manager/0.log" Feb 16 22:24:21 crc kubenswrapper[4805]: I0216 22:24:21.294650 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-lzgrn_8131c3df-9b2e-48f7-95c9-95a8d5ba9f69/manager/0.log" Feb 16 22:24:21 crc kubenswrapper[4805]: I0216 22:24:21.616257 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c4sqn4_32fb9648-24e5-4073-902e-f76ea1eaa512/manager/0.log" Feb 16 22:24:22 crc kubenswrapper[4805]: I0216 22:24:22.007917 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7dd97cff99-jm66g_6cead8ad-a49a-4a9c-9491-99ec351f9bbe/operator/0.log" Feb 16 22:24:22 crc kubenswrapper[4805]: I0216 22:24:22.936481 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nlsn9_8b4db840-b551-49a7-b35b-b6a52a9e78a2/registry-server/0.log" Feb 16 22:24:23 crc kubenswrapper[4805]: I0216 22:24:23.227068 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-fsr75_3205b9a4-589f-4200-9e47-a073f38397c1/manager/0.log" Feb 16 22:24:23 crc kubenswrapper[4805]: I0216 22:24:23.515456 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-s54qb_745993b9-7ebe-405b-9242-a561ed40c3a7/manager/0.log" Feb 16 22:24:23 crc kubenswrapper[4805]: I0216 22:24:23.714968 4805 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-scwrg_209db403-57f8-46b8-9ca3-0986c81dd9c0/operator/0.log" Feb 16 22:24:23 crc kubenswrapper[4805]: I0216 22:24:23.976161 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-nv9kv_b64a3a78-cbf6-44ce-a7f2-7955af1d6e04/manager/0.log" Feb 16 22:24:24 crc kubenswrapper[4805]: I0216 22:24:24.121387 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86b9cf86d-ntqc8_6cf33838-78c1-40de-9089-f68fbe14ea86/manager/0.log" Feb 16 22:24:24 crc kubenswrapper[4805]: I0216 22:24:24.347135 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-h547j_7c325ae7-03f7-4d40-a4b2-9fd7a10b98bf/manager/0.log" Feb 16 22:24:24 crc kubenswrapper[4805]: I0216 22:24:24.620369 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-bg92w_55ee298b-d2cf-460f-b540-e748a09f81f0/manager/0.log" Feb 16 22:24:24 crc kubenswrapper[4805]: I0216 22:24:24.714760 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7d4dd64c87-45rd7_549bee15-d4bb-43c2-af22-1bdbf4e66b78/manager/0.log" Feb 16 22:24:24 crc kubenswrapper[4805]: I0216 22:24:24.786147 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-kt9zs_856ad725-988a-44f6-8cb1-57ff2498e192/manager/0.log" Feb 16 22:24:26 crc kubenswrapper[4805]: E0216 22:24:26.600469 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:24:29 crc kubenswrapper[4805]: E0216 22:24:29.600731 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:24:29 crc kubenswrapper[4805]: I0216 22:24:29.777037 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-cdzwg_5bc499f8-3fb7-4e12-bb4c-1e903e0c4333/manager/0.log" Feb 16 22:24:38 crc kubenswrapper[4805]: E0216 22:24:38.600155 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:24:41 crc kubenswrapper[4805]: E0216 22:24:41.600923 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:24:50 crc kubenswrapper[4805]: I0216 22:24:50.623905 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9zzrz_5075d111-78c5-40b6-8b8e-1e5ce57d943b/control-plane-machine-set-operator/0.log" Feb 16 22:24:50 crc kubenswrapper[4805]: I0216 22:24:50.860018 4805 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r4r7b_5f9120dc-89fa-43b6-b757-925e25598369/kube-rbac-proxy/0.log" Feb 16 22:24:50 crc kubenswrapper[4805]: I0216 22:24:50.884094 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r4r7b_5f9120dc-89fa-43b6-b757-925e25598369/machine-api-operator/0.log" Feb 16 22:24:53 crc kubenswrapper[4805]: E0216 22:24:53.611111 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:24:56 crc kubenswrapper[4805]: E0216 22:24:56.601708 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:25:06 crc kubenswrapper[4805]: I0216 22:25:06.732241 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-7g79d_d8c90994-bbc1-48cc-8663-0fee9997a85c/cert-manager-controller/0.log" Feb 16 22:25:06 crc kubenswrapper[4805]: I0216 22:25:06.924872 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-pcjsg_7854ef0f-6654-4e1d-960f-3accb2997f48/cert-manager-cainjector/0.log" Feb 16 22:25:07 crc kubenswrapper[4805]: I0216 22:25:07.007225 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-78zrw_73c1222f-9f42-429a-8764-0193764d37bb/cert-manager-webhook/0.log" Feb 16 22:25:08 crc kubenswrapper[4805]: E0216 
22:25:08.602817 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:25:11 crc kubenswrapper[4805]: E0216 22:25:11.600117 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:25:22 crc kubenswrapper[4805]: I0216 22:25:22.506653 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-grmth_4f9115b9-3cc9-44f1-bf72-3141429a5001/nmstate-console-plugin/0.log" Feb 16 22:25:22 crc kubenswrapper[4805]: E0216 22:25:22.602401 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:25:22 crc kubenswrapper[4805]: I0216 22:25:22.748472 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-x5dj7_339cbe11-a64b-4a7f-b5fc-4f2136c6dfac/nmstate-handler/0.log" Feb 16 22:25:22 crc kubenswrapper[4805]: I0216 22:25:22.804480 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-mbrjj_f676a01e-1cc1-482c-933b-5312fed324e2/kube-rbac-proxy/0.log" Feb 16 22:25:22 crc kubenswrapper[4805]: I0216 22:25:22.838478 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-mbrjj_f676a01e-1cc1-482c-933b-5312fed324e2/nmstate-metrics/0.log" Feb 16 22:25:22 crc kubenswrapper[4805]: I0216 22:25:22.954544 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-js2b9_79b41a3f-fa2e-4d38-872b-17744c7ef23e/nmstate-operator/0.log" Feb 16 22:25:23 crc kubenswrapper[4805]: I0216 22:25:23.028090 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-nsw6x_91f381c7-9be0-48df-8f9d-1a708710e670/nmstate-webhook/0.log" Feb 16 22:25:24 crc kubenswrapper[4805]: E0216 22:25:24.600715 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:25:33 crc kubenswrapper[4805]: E0216 22:25:33.608268 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:25:37 crc kubenswrapper[4805]: E0216 22:25:37.599764 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:25:38 crc kubenswrapper[4805]: I0216 22:25:38.197933 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c4778c849-ds7n7_efa9b8e9-54a3-4740-9f0e-391521f3ed25/kube-rbac-proxy/0.log" Feb 16 22:25:38 crc kubenswrapper[4805]: I0216 22:25:38.236958 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c4778c849-ds7n7_efa9b8e9-54a3-4740-9f0e-391521f3ed25/manager/0.log" Feb 16 22:25:45 crc kubenswrapper[4805]: E0216 22:25:45.603247 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:25:48 crc kubenswrapper[4805]: E0216 22:25:48.602008 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:25:55 crc kubenswrapper[4805]: I0216 22:25:55.836206 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6g7x4_79bf21e6-60c9-4788-a02f-8efb828dc8ef/prometheus-operator/0.log" Feb 16 22:25:55 crc kubenswrapper[4805]: I0216 22:25:55.885203 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_111775fe-ccc4-4b93-9fcf-5a9bd115788c/prometheus-operator-admission-webhook/0.log" Feb 16 22:25:56 crc kubenswrapper[4805]: I0216 22:25:56.029687 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_22284904-7391-4eb6-9ef7-adf068c3d7ec/prometheus-operator-admission-webhook/0.log" Feb 16 22:25:56 crc kubenswrapper[4805]: I0216 22:25:56.133683 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4q24b_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8/operator/0.log" Feb 16 22:25:56 crc kubenswrapper[4805]: I0216 22:25:56.269611 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-98vxk_e83ff69c-bdd9-42c7-9404-eb267edb67b5/observability-ui-dashboards/0.log" Feb 16 22:25:56 crc kubenswrapper[4805]: I0216 22:25:56.342445 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-jpnk2_2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3/perses-operator/0.log" Feb 16 22:25:58 crc kubenswrapper[4805]: E0216 22:25:58.600166 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:26:00 crc kubenswrapper[4805]: E0216 22:26:00.600257 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:26:08 crc kubenswrapper[4805]: I0216 22:26:08.099596 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:26:08 crc kubenswrapper[4805]: I0216 22:26:08.101923 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:26:11 crc kubenswrapper[4805]: E0216 22:26:11.601842 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:26:12 crc kubenswrapper[4805]: I0216 22:26:12.581363 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-gpnwn_2d0f2c52-868b-4753-9d83-9d7204ea6d2d/cluster-logging-operator/0.log" Feb 16 22:26:12 crc kubenswrapper[4805]: E0216 22:26:12.600081 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:26:12 crc kubenswrapper[4805]: I0216 22:26:12.755066 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-5cssm_37b50f94-8a93-4a7b-bb7c-7ce9cc6e669a/collector/0.log" Feb 16 22:26:12 crc kubenswrapper[4805]: I0216 22:26:12.841323 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_446ae7b1-0d4c-43f4-9580-8b0472211510/loki-compactor/0.log" 
Feb 16 22:26:12 crc kubenswrapper[4805]: I0216 22:26:12.938144 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-4rxpw_26464e34-2dcc-45f5-a73a-94fd7fa041b8/loki-distributor/0.log" Feb 16 22:26:12 crc kubenswrapper[4805]: I0216 22:26:12.993628 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-85cf5dc48c-4fck8_95dd962a-260e-4c6d-9e07-c5b99377f3e5/gateway/0.log" Feb 16 22:26:13 crc kubenswrapper[4805]: I0216 22:26:13.050358 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-85cf5dc48c-4fck8_95dd962a-260e-4c6d-9e07-c5b99377f3e5/opa/0.log" Feb 16 22:26:13 crc kubenswrapper[4805]: I0216 22:26:13.161225 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-85cf5dc48c-dwk5b_c26f79ee-1643-4837-a0af-94910dafc8a7/opa/0.log" Feb 16 22:26:13 crc kubenswrapper[4805]: I0216 22:26:13.223969 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-85cf5dc48c-dwk5b_c26f79ee-1643-4837-a0af-94910dafc8a7/gateway/0.log" Feb 16 22:26:13 crc kubenswrapper[4805]: I0216 22:26:13.304004 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_d94ec43e-3a93-49d2-aa96-c781442f21cd/loki-index-gateway/0.log" Feb 16 22:26:13 crc kubenswrapper[4805]: I0216 22:26:13.461369 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_dae1f136-5edb-45c8-bba0-e6a50c1f8084/loki-ingester/0.log" Feb 16 22:26:13 crc kubenswrapper[4805]: I0216 22:26:13.510087 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-5j28p_5b58cbb2-c2de-4f33-a1ea-344729a67d13/loki-querier/0.log" Feb 16 22:26:13 crc kubenswrapper[4805]: I0216 22:26:13.632357 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-x4j4k_31af895c-b793-4e0f-bae7-031db6fe786f/loki-query-frontend/0.log" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.396141 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6qh6p"] Feb 16 22:26:21 crc kubenswrapper[4805]: E0216 22:26:21.397791 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerName="registry-server" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.397811 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerName="registry-server" Feb 16 22:26:21 crc kubenswrapper[4805]: E0216 22:26:21.397909 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerName="extract-content" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.397923 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerName="extract-content" Feb 16 22:26:21 crc kubenswrapper[4805]: E0216 22:26:21.397999 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerName="extract-utilities" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.398031 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerName="extract-utilities" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.398490 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d9c35d-5cfa-4b2d-b211-833d32a72d1b" containerName="registry-server" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.402083 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.412948 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qh6p"] Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.419494 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-catalog-content\") pod \"redhat-marketplace-6qh6p\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.419769 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-utilities\") pod \"redhat-marketplace-6qh6p\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.419895 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkn2q\" (UniqueName: \"kubernetes.io/projected/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-kube-api-access-hkn2q\") pod \"redhat-marketplace-6qh6p\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.522581 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkn2q\" (UniqueName: \"kubernetes.io/projected/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-kube-api-access-hkn2q\") pod \"redhat-marketplace-6qh6p\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.522675 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-catalog-content\") pod \"redhat-marketplace-6qh6p\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.522901 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-utilities\") pod \"redhat-marketplace-6qh6p\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.523579 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-utilities\") pod \"redhat-marketplace-6qh6p\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.523631 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-catalog-content\") pod \"redhat-marketplace-6qh6p\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.564528 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkn2q\" (UniqueName: \"kubernetes.io/projected/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-kube-api-access-hkn2q\") pod \"redhat-marketplace-6qh6p\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:21 crc kubenswrapper[4805]: I0216 22:26:21.757381 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:22 crc kubenswrapper[4805]: I0216 22:26:22.269416 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qh6p"] Feb 16 22:26:23 crc kubenswrapper[4805]: I0216 22:26:23.207164 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" containerID="6851713a9553a82a149817927d274b27f2b55b6799373043f50177e51fbe2f6f" exitCode=0 Feb 16 22:26:23 crc kubenswrapper[4805]: I0216 22:26:23.207611 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qh6p" event={"ID":"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5","Type":"ContainerDied","Data":"6851713a9553a82a149817927d274b27f2b55b6799373043f50177e51fbe2f6f"} Feb 16 22:26:23 crc kubenswrapper[4805]: I0216 22:26:23.207652 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qh6p" event={"ID":"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5","Type":"ContainerStarted","Data":"99504ed4c3b9e53e391c2d55ccb94d8328aabbc0bd83b8a06af9730fffdc825e"} Feb 16 22:26:24 crc kubenswrapper[4805]: E0216 22:26:24.601456 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:26:25 crc kubenswrapper[4805]: I0216 22:26:25.235149 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" containerID="2412885a21689bff892092618f6fdc9f66d1cf4a977c71aaf1801fe0cce31986" exitCode=0 Feb 16 22:26:25 crc kubenswrapper[4805]: I0216 22:26:25.235192 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qh6p" 
event={"ID":"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5","Type":"ContainerDied","Data":"2412885a21689bff892092618f6fdc9f66d1cf4a977c71aaf1801fe0cce31986"} Feb 16 22:26:26 crc kubenswrapper[4805]: I0216 22:26:26.248148 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qh6p" event={"ID":"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5","Type":"ContainerStarted","Data":"a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b"} Feb 16 22:26:26 crc kubenswrapper[4805]: I0216 22:26:26.269627 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6qh6p" podStartSLOduration=2.85124281 podStartE2EDuration="5.269592587s" podCreationTimestamp="2026-02-16 22:26:21 +0000 UTC" firstStartedPulling="2026-02-16 22:26:23.211194872 +0000 UTC m=+5401.029878207" lastFinishedPulling="2026-02-16 22:26:25.629544679 +0000 UTC m=+5403.448227984" observedRunningTime="2026-02-16 22:26:26.267408228 +0000 UTC m=+5404.086091523" watchObservedRunningTime="2026-02-16 22:26:26.269592587 +0000 UTC m=+5404.088275882" Feb 16 22:26:26 crc kubenswrapper[4805]: E0216 22:26:26.599088 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:26:30 crc kubenswrapper[4805]: I0216 22:26:30.745257 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-bbdrb_df2d01e8-01e1-48db-96d7-1ef79c926d5a/kube-rbac-proxy/0.log" Feb 16 22:26:30 crc kubenswrapper[4805]: I0216 22:26:30.905881 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-bbdrb_df2d01e8-01e1-48db-96d7-1ef79c926d5a/controller/0.log" Feb 16 22:26:31 crc 
kubenswrapper[4805]: I0216 22:26:31.018984 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-frr-files/0.log" Feb 16 22:26:31 crc kubenswrapper[4805]: I0216 22:26:31.761668 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:31 crc kubenswrapper[4805]: I0216 22:26:31.762081 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:31 crc kubenswrapper[4805]: I0216 22:26:31.821826 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-metrics/0.log" Feb 16 22:26:31 crc kubenswrapper[4805]: I0216 22:26:31.823233 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-frr-files/0.log" Feb 16 22:26:31 crc kubenswrapper[4805]: I0216 22:26:31.825931 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:31 crc kubenswrapper[4805]: I0216 22:26:31.843889 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-reloader/0.log" Feb 16 22:26:31 crc kubenswrapper[4805]: I0216 22:26:31.868495 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-reloader/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.044584 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-metrics/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.051220 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-frr-files/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.074388 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-metrics/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.089626 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-reloader/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.266973 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-reloader/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.286548 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-metrics/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.292097 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/controller/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.296360 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/cp-frr-files/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.367384 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.423205 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qh6p"] Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.478677 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/frr-metrics/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 
22:26:32.523557 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/kube-rbac-proxy/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.604905 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/kube-rbac-proxy-frr/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.701795 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/reloader/0.log" Feb 16 22:26:32 crc kubenswrapper[4805]: I0216 22:26:32.829253 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-7spzb_0b9f819d-da9a-4b13-b0fb-70e11f25fb3f/frr-k8s-webhook-server/0.log" Feb 16 22:26:33 crc kubenswrapper[4805]: I0216 22:26:33.027512 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7765589444-2hjkq_e255f1c2-9a99-44b8-830f-56015433f783/manager/0.log" Feb 16 22:26:33 crc kubenswrapper[4805]: I0216 22:26:33.212496 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7978c795d6-h8bpt_c56fcb42-00d2-410a-9aec-183240413d1c/webhook-server/0.log" Feb 16 22:26:33 crc kubenswrapper[4805]: I0216 22:26:33.355478 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9ttm2_a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289/kube-rbac-proxy/0.log" Feb 16 22:26:34 crc kubenswrapper[4805]: I0216 22:26:34.005634 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9ttm2_a5b5e1ab-7b1e-4e78-9db3-a86ba41a1289/speaker/0.log" Feb 16 22:26:34 crc kubenswrapper[4805]: I0216 22:26:34.268427 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-87c7n_b729a8ff-87a7-4ed1-9af8-d2da4849e89c/frr/0.log" Feb 16 22:26:34 crc 
kubenswrapper[4805]: I0216 22:26:34.330187 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6qh6p" podUID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" containerName="registry-server" containerID="cri-o://a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b" gracePeriod=2 Feb 16 22:26:34 crc kubenswrapper[4805]: I0216 22:26:34.920695 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.113001 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkn2q\" (UniqueName: \"kubernetes.io/projected/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-kube-api-access-hkn2q\") pod \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.113095 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-utilities\") pod \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.113152 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-catalog-content\") pod \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\" (UID: \"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5\") " Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.113896 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-utilities" (OuterVolumeSpecName: "utilities") pod "5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" (UID: "5c0c6292-6f28-4e3e-a489-b38d6c18c2c5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.119281 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-kube-api-access-hkn2q" (OuterVolumeSpecName: "kube-api-access-hkn2q") pod "5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" (UID: "5c0c6292-6f28-4e3e-a489-b38d6c18c2c5"). InnerVolumeSpecName "kube-api-access-hkn2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.146147 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" (UID: "5c0c6292-6f28-4e3e-a489-b38d6c18c2c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.216731 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkn2q\" (UniqueName: \"kubernetes.io/projected/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-kube-api-access-hkn2q\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.216987 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.217082 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.343390 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" 
containerID="a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b" exitCode=0 Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.343448 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qh6p" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.343449 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qh6p" event={"ID":"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5","Type":"ContainerDied","Data":"a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b"} Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.343647 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qh6p" event={"ID":"5c0c6292-6f28-4e3e-a489-b38d6c18c2c5","Type":"ContainerDied","Data":"99504ed4c3b9e53e391c2d55ccb94d8328aabbc0bd83b8a06af9730fffdc825e"} Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.343691 4805 scope.go:117] "RemoveContainer" containerID="a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.378909 4805 scope.go:117] "RemoveContainer" containerID="2412885a21689bff892092618f6fdc9f66d1cf4a977c71aaf1801fe0cce31986" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.385531 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qh6p"] Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.395779 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qh6p"] Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.411787 4805 scope.go:117] "RemoveContainer" containerID="6851713a9553a82a149817927d274b27f2b55b6799373043f50177e51fbe2f6f" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.476217 4805 scope.go:117] "RemoveContainer" containerID="a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b" Feb 16 
22:26:35 crc kubenswrapper[4805]: E0216 22:26:35.476599 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b\": container with ID starting with a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b not found: ID does not exist" containerID="a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.476646 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b"} err="failed to get container status \"a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b\": rpc error: code = NotFound desc = could not find container \"a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b\": container with ID starting with a04a1411d9d0ad0b80e80068a0b3ad921839b5c87c4e37a54b7627639493901b not found: ID does not exist" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.476671 4805 scope.go:117] "RemoveContainer" containerID="2412885a21689bff892092618f6fdc9f66d1cf4a977c71aaf1801fe0cce31986" Feb 16 22:26:35 crc kubenswrapper[4805]: E0216 22:26:35.477281 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2412885a21689bff892092618f6fdc9f66d1cf4a977c71aaf1801fe0cce31986\": container with ID starting with 2412885a21689bff892092618f6fdc9f66d1cf4a977c71aaf1801fe0cce31986 not found: ID does not exist" containerID="2412885a21689bff892092618f6fdc9f66d1cf4a977c71aaf1801fe0cce31986" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.477336 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2412885a21689bff892092618f6fdc9f66d1cf4a977c71aaf1801fe0cce31986"} err="failed to get container status 
\"2412885a21689bff892092618f6fdc9f66d1cf4a977c71aaf1801fe0cce31986\": rpc error: code = NotFound desc = could not find container \"2412885a21689bff892092618f6fdc9f66d1cf4a977c71aaf1801fe0cce31986\": container with ID starting with 2412885a21689bff892092618f6fdc9f66d1cf4a977c71aaf1801fe0cce31986 not found: ID does not exist" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.477351 4805 scope.go:117] "RemoveContainer" containerID="6851713a9553a82a149817927d274b27f2b55b6799373043f50177e51fbe2f6f" Feb 16 22:26:35 crc kubenswrapper[4805]: E0216 22:26:35.477707 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6851713a9553a82a149817927d274b27f2b55b6799373043f50177e51fbe2f6f\": container with ID starting with 6851713a9553a82a149817927d274b27f2b55b6799373043f50177e51fbe2f6f not found: ID does not exist" containerID="6851713a9553a82a149817927d274b27f2b55b6799373043f50177e51fbe2f6f" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.477786 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6851713a9553a82a149817927d274b27f2b55b6799373043f50177e51fbe2f6f"} err="failed to get container status \"6851713a9553a82a149817927d274b27f2b55b6799373043f50177e51fbe2f6f\": rpc error: code = NotFound desc = could not find container \"6851713a9553a82a149817927d274b27f2b55b6799373043f50177e51fbe2f6f\": container with ID starting with 6851713a9553a82a149817927d274b27f2b55b6799373043f50177e51fbe2f6f not found: ID does not exist" Feb 16 22:26:35 crc kubenswrapper[4805]: I0216 22:26:35.613957 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" path="/var/lib/kubelet/pods/5c0c6292-6f28-4e3e-a489-b38d6c18c2c5/volumes" Feb 16 22:26:38 crc kubenswrapper[4805]: I0216 22:26:38.099942 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:26:38 crc kubenswrapper[4805]: I0216 22:26:38.100626 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:26:38 crc kubenswrapper[4805]: E0216 22:26:38.599767 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:26:40 crc kubenswrapper[4805]: E0216 22:26:40.600903 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:26:47 crc kubenswrapper[4805]: I0216 22:26:47.993281 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62_5ede007c-534d-4702-8d57-307734558aff/util/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.121354 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62_5ede007c-534d-4702-8d57-307734558aff/util/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.133407 4805 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62_5ede007c-534d-4702-8d57-307734558aff/pull/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.176398 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62_5ede007c-534d-4702-8d57-307734558aff/pull/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.378921 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62_5ede007c-534d-4702-8d57-307734558aff/util/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.406411 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62_5ede007c-534d-4702-8d57-307734558aff/pull/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.414123 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19c9x62_5ede007c-534d-4702-8d57-307734558aff/extract/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.557084 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8_3a65bf60-dac3-485c-83ed-cd7900050692/util/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.745714 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8_3a65bf60-dac3-485c-83ed-cd7900050692/pull/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.809505 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8_3a65bf60-dac3-485c-83ed-cd7900050692/util/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.838265 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8_3a65bf60-dac3-485c-83ed-cd7900050692/pull/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.971299 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8_3a65bf60-dac3-485c-83ed-cd7900050692/pull/0.log" Feb 16 22:26:48 crc kubenswrapper[4805]: I0216 22:26:48.995819 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8_3a65bf60-dac3-485c-83ed-cd7900050692/extract/0.log" Feb 16 22:26:49 crc kubenswrapper[4805]: I0216 22:26:49.028957 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tfdc8_3a65bf60-dac3-485c-83ed-cd7900050692/util/0.log" Feb 16 22:26:49 crc kubenswrapper[4805]: I0216 22:26:49.283298 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42_75a7675d-39b3-49c2-8ffb-bcec428f29b3/util/0.log" Feb 16 22:26:49 crc kubenswrapper[4805]: E0216 22:26:49.599310 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:26:49 crc kubenswrapper[4805]: I0216 22:26:49.605993 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42_75a7675d-39b3-49c2-8ffb-bcec428f29b3/pull/0.log" Feb 16 22:26:49 crc kubenswrapper[4805]: I0216 22:26:49.709966 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42_75a7675d-39b3-49c2-8ffb-bcec428f29b3/util/0.log" Feb 16 22:26:49 crc kubenswrapper[4805]: I0216 22:26:49.746709 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42_75a7675d-39b3-49c2-8ffb-bcec428f29b3/pull/0.log" Feb 16 22:26:49 crc kubenswrapper[4805]: I0216 22:26:49.911271 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42_75a7675d-39b3-49c2-8ffb-bcec428f29b3/extract/0.log" Feb 16 22:26:49 crc kubenswrapper[4805]: I0216 22:26:49.914014 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42_75a7675d-39b3-49c2-8ffb-bcec428f29b3/util/0.log" Feb 16 22:26:49 crc kubenswrapper[4805]: I0216 22:26:49.956467 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134zm42_75a7675d-39b3-49c2-8ffb-bcec428f29b3/pull/0.log" Feb 16 22:26:50 crc kubenswrapper[4805]: I0216 22:26:50.107148 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-smkdm_e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3/extract-utilities/0.log" Feb 16 22:26:50 crc kubenswrapper[4805]: I0216 22:26:50.368216 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-smkdm_e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3/extract-content/0.log" Feb 16 22:26:50 crc kubenswrapper[4805]: I0216 
22:26:50.385785 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-smkdm_e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3/extract-utilities/0.log" Feb 16 22:26:50 crc kubenswrapper[4805]: I0216 22:26:50.413936 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-smkdm_e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3/extract-content/0.log" Feb 16 22:26:50 crc kubenswrapper[4805]: I0216 22:26:50.624900 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-smkdm_e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3/extract-utilities/0.log" Feb 16 22:26:50 crc kubenswrapper[4805]: I0216 22:26:50.715148 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-smkdm_e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3/extract-content/0.log" Feb 16 22:26:50 crc kubenswrapper[4805]: I0216 22:26:50.916204 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-48dqc_b392345c-7432-4562-a35a-5205eea9e26a/extract-utilities/0.log" Feb 16 22:26:51 crc kubenswrapper[4805]: I0216 22:26:51.108739 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-48dqc_b392345c-7432-4562-a35a-5205eea9e26a/extract-content/0.log" Feb 16 22:26:51 crc kubenswrapper[4805]: I0216 22:26:51.137413 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-48dqc_b392345c-7432-4562-a35a-5205eea9e26a/extract-content/0.log" Feb 16 22:26:51 crc kubenswrapper[4805]: I0216 22:26:51.194577 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-48dqc_b392345c-7432-4562-a35a-5205eea9e26a/extract-utilities/0.log" Feb 16 22:26:51 crc kubenswrapper[4805]: I0216 22:26:51.198777 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-smkdm_e94bdd0a-ba2a-4c34-8228-ee8aabe99ca3/registry-server/0.log" Feb 16 22:26:51 crc kubenswrapper[4805]: I0216 22:26:51.394114 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-48dqc_b392345c-7432-4562-a35a-5205eea9e26a/extract-content/0.log" Feb 16 22:26:51 crc kubenswrapper[4805]: I0216 22:26:51.454223 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-48dqc_b392345c-7432-4562-a35a-5205eea9e26a/extract-utilities/0.log" Feb 16 22:26:51 crc kubenswrapper[4805]: I0216 22:26:51.615962 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw_6785263a-326f-4912-b4bc-c1cea001e2a9/util/0.log" Feb 16 22:26:51 crc kubenswrapper[4805]: I0216 22:26:51.860958 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw_6785263a-326f-4912-b4bc-c1cea001e2a9/pull/0.log" Feb 16 22:26:51 crc kubenswrapper[4805]: I0216 22:26:51.891174 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw_6785263a-326f-4912-b4bc-c1cea001e2a9/util/0.log" Feb 16 22:26:51 crc kubenswrapper[4805]: I0216 22:26:51.969340 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw_6785263a-326f-4912-b4bc-c1cea001e2a9/pull/0.log" Feb 16 22:26:52 crc kubenswrapper[4805]: I0216 22:26:52.333633 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-48dqc_b392345c-7432-4562-a35a-5205eea9e26a/registry-server/0.log" Feb 16 22:26:52 crc kubenswrapper[4805]: I0216 22:26:52.702651 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw_6785263a-326f-4912-b4bc-c1cea001e2a9/extract/0.log" Feb 16 22:26:52 crc kubenswrapper[4805]: I0216 22:26:52.733122 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr_8ad99527-987d-443d-b6c4-4abc6fd5fd72/util/0.log" Feb 16 22:26:52 crc kubenswrapper[4805]: I0216 22:26:52.752384 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw_6785263a-326f-4912-b4bc-c1cea001e2a9/pull/0.log" Feb 16 22:26:52 crc kubenswrapper[4805]: I0216 22:26:52.762941 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989gp4tw_6785263a-326f-4912-b4bc-c1cea001e2a9/util/0.log" Feb 16 22:26:52 crc kubenswrapper[4805]: I0216 22:26:52.890489 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr_8ad99527-987d-443d-b6c4-4abc6fd5fd72/util/0.log" Feb 16 22:26:52 crc kubenswrapper[4805]: I0216 22:26:52.914251 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr_8ad99527-987d-443d-b6c4-4abc6fd5fd72/pull/0.log" Feb 16 22:26:52 crc kubenswrapper[4805]: I0216 22:26:52.932193 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr_8ad99527-987d-443d-b6c4-4abc6fd5fd72/pull/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.093407 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr_8ad99527-987d-443d-b6c4-4abc6fd5fd72/util/0.log" Feb 16 
22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.119143 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr_8ad99527-987d-443d-b6c4-4abc6fd5fd72/extract/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.154985 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecab9rfr_8ad99527-987d-443d-b6c4-4abc6fd5fd72/pull/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.190207 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6cb4l_14dd9df6-740d-4d6b-90cc-f62d0cb76f4d/marketplace-operator/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.336433 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r68dk_5b4fd91e-cf72-4fe2-9a42-078567fe7782/extract-utilities/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.501420 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r68dk_5b4fd91e-cf72-4fe2-9a42-078567fe7782/extract-content/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.521652 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r68dk_5b4fd91e-cf72-4fe2-9a42-078567fe7782/extract-content/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.536111 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r68dk_5b4fd91e-cf72-4fe2-9a42-078567fe7782/extract-utilities/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.676108 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r68dk_5b4fd91e-cf72-4fe2-9a42-078567fe7782/extract-utilities/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 
22:26:53.691885 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r68dk_5b4fd91e-cf72-4fe2-9a42-078567fe7782/extract-content/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.719033 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f2pbp_58acc124-20af-4ab2-90ea-26cbdfe3b6eb/extract-utilities/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.912450 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r68dk_5b4fd91e-cf72-4fe2-9a42-078567fe7782/registry-server/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.977230 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f2pbp_58acc124-20af-4ab2-90ea-26cbdfe3b6eb/extract-utilities/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.977877 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f2pbp_58acc124-20af-4ab2-90ea-26cbdfe3b6eb/extract-content/0.log" Feb 16 22:26:53 crc kubenswrapper[4805]: I0216 22:26:53.993707 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f2pbp_58acc124-20af-4ab2-90ea-26cbdfe3b6eb/extract-content/0.log" Feb 16 22:26:54 crc kubenswrapper[4805]: I0216 22:26:54.800227 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f2pbp_58acc124-20af-4ab2-90ea-26cbdfe3b6eb/extract-utilities/0.log" Feb 16 22:26:54 crc kubenswrapper[4805]: I0216 22:26:54.805130 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f2pbp_58acc124-20af-4ab2-90ea-26cbdfe3b6eb/extract-content/0.log" Feb 16 22:26:55 crc kubenswrapper[4805]: I0216 22:26:55.572034 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-f2pbp_58acc124-20af-4ab2-90ea-26cbdfe3b6eb/registry-server/0.log" Feb 16 22:26:55 crc kubenswrapper[4805]: E0216 22:26:55.600026 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:27:02 crc kubenswrapper[4805]: E0216 22:27:02.602911 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:27:07 crc kubenswrapper[4805]: I0216 22:27:07.935614 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-846988ff46-2gttv_111775fe-ccc4-4b93-9fcf-5a9bd115788c/prometheus-operator-admission-webhook/0.log" Feb 16 22:27:07 crc kubenswrapper[4805]: I0216 22:27:07.936156 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-846988ff46-cvjn2_22284904-7391-4eb6-9ef7-adf068c3d7ec/prometheus-operator-admission-webhook/0.log" Feb 16 22:27:07 crc kubenswrapper[4805]: I0216 22:27:07.996501 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6g7x4_79bf21e6-60c9-4788-a02f-8efb828dc8ef/prometheus-operator/0.log" Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.099647 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.099719 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.099792 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.101006 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d90cb53c820da42f245d8d47ebd494fcd16f9b063cd125b56ccd5fdeaf264f12"} pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.101085 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://d90cb53c820da42f245d8d47ebd494fcd16f9b063cd125b56ccd5fdeaf264f12" gracePeriod=600 Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.135073 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-98vxk_e83ff69c-bdd9-42c7-9404-eb267edb67b5/observability-ui-dashboards/0.log" Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.181117 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-jpnk2_2aa5e9f0-6cd0-4b5b-a0c0-25b4774156f3/perses-operator/0.log" Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.194785 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4q24b_6a1ac07e-7ca8-4dc1-8b65-7a985ace28e8/operator/0.log" Feb 16 22:27:08 crc kubenswrapper[4805]: E0216 22:27:08.599026 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.693575 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="d90cb53c820da42f245d8d47ebd494fcd16f9b063cd125b56ccd5fdeaf264f12" exitCode=0 Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.693620 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"d90cb53c820da42f245d8d47ebd494fcd16f9b063cd125b56ccd5fdeaf264f12"} Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.693654 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535"} Feb 16 22:27:08 crc kubenswrapper[4805]: I0216 22:27:08.693678 4805 scope.go:117] "RemoveContainer" containerID="e2f9b5e48c1a6044af8d1f35b03286d0b2f9ea8d14b80488de0b12c329eb4a45" Feb 16 22:27:17 crc kubenswrapper[4805]: E0216 22:27:17.606785 4805 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:27:20 crc kubenswrapper[4805]: E0216 22:27:20.601594 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:27:22 crc kubenswrapper[4805]: I0216 22:27:22.631058 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c4778c849-ds7n7_efa9b8e9-54a3-4740-9f0e-391521f3ed25/kube-rbac-proxy/0.log" Feb 16 22:27:23 crc kubenswrapper[4805]: I0216 22:27:22.689845 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c4778c849-ds7n7_efa9b8e9-54a3-4740-9f0e-391521f3ed25/manager/0.log" Feb 16 22:27:31 crc kubenswrapper[4805]: I0216 22:27:31.605314 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:27:31 crc kubenswrapper[4805]: E0216 22:27:31.734465 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:27:31 crc kubenswrapper[4805]: E0216 22:27:31.734526 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:27:31 crc kubenswrapper[4805]: E0216 22:27:31.734657 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89
q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:27:31 crc kubenswrapper[4805]: E0216 22:27:31.735887 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.343782 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wqgr6"] Feb 16 22:27:34 crc kubenswrapper[4805]: E0216 22:27:34.344980 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" containerName="extract-utilities" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.345000 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" containerName="extract-utilities" Feb 16 22:27:34 crc kubenswrapper[4805]: E0216 22:27:34.345037 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" containerName="registry-server" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.345046 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" containerName="registry-server" Feb 16 22:27:34 crc kubenswrapper[4805]: E0216 22:27:34.345081 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" containerName="extract-content" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.345089 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" containerName="extract-content" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.345349 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0c6292-6f28-4e3e-a489-b38d6c18c2c5" containerName="registry-server" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.347309 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.373683 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqgr6"] Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.518128 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2b7n\" (UniqueName: \"kubernetes.io/projected/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-kube-api-access-s2b7n\") pod \"community-operators-wqgr6\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.518173 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-utilities\") pod \"community-operators-wqgr6\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.518232 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-catalog-content\") pod \"community-operators-wqgr6\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:34 crc kubenswrapper[4805]: E0216 22:27:34.599488 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.620318 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2b7n\" (UniqueName: \"kubernetes.io/projected/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-kube-api-access-s2b7n\") pod \"community-operators-wqgr6\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.620371 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-utilities\") pod \"community-operators-wqgr6\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.620436 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-catalog-content\") pod \"community-operators-wqgr6\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.621066 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-catalog-content\") pod \"community-operators-wqgr6\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.621530 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-utilities\") pod \"community-operators-wqgr6\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.667632 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s2b7n\" (UniqueName: \"kubernetes.io/projected/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-kube-api-access-s2b7n\") pod \"community-operators-wqgr6\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:34 crc kubenswrapper[4805]: I0216 22:27:34.729004 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:35 crc kubenswrapper[4805]: I0216 22:27:35.385267 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqgr6"] Feb 16 22:27:35 crc kubenswrapper[4805]: I0216 22:27:35.999158 4805 generic.go:334] "Generic (PLEG): container finished" podID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" containerID="667ae61cc636e79c702d097523853497e64fd2f8b15d10d3d242aa660c957727" exitCode=0 Feb 16 22:27:35 crc kubenswrapper[4805]: I0216 22:27:35.999669 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqgr6" event={"ID":"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af","Type":"ContainerDied","Data":"667ae61cc636e79c702d097523853497e64fd2f8b15d10d3d242aa660c957727"} Feb 16 22:27:35 crc kubenswrapper[4805]: I0216 22:27:35.999832 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqgr6" event={"ID":"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af","Type":"ContainerStarted","Data":"1ed6f755c826b08b6e87c894d29c382a21c9c048b38e90579a4dc4345ee1b174"} Feb 16 22:27:37 crc kubenswrapper[4805]: I0216 22:27:37.012078 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqgr6" event={"ID":"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af","Type":"ContainerStarted","Data":"08196a677cecdfcc4ab66a6b8cf8f1707a4c9d43f9c4663a1e7ac6fb7d73fade"} Feb 16 22:27:39 crc kubenswrapper[4805]: I0216 22:27:39.034967 4805 generic.go:334] "Generic (PLEG): 
container finished" podID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" containerID="08196a677cecdfcc4ab66a6b8cf8f1707a4c9d43f9c4663a1e7ac6fb7d73fade" exitCode=0 Feb 16 22:27:39 crc kubenswrapper[4805]: I0216 22:27:39.035225 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqgr6" event={"ID":"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af","Type":"ContainerDied","Data":"08196a677cecdfcc4ab66a6b8cf8f1707a4c9d43f9c4663a1e7ac6fb7d73fade"} Feb 16 22:27:41 crc kubenswrapper[4805]: I0216 22:27:41.070678 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqgr6" event={"ID":"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af","Type":"ContainerStarted","Data":"dbfbc51313628031f9a8a9e0cd88c717611cee279962b7d3a2b868d785c526f7"} Feb 16 22:27:41 crc kubenswrapper[4805]: I0216 22:27:41.094945 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wqgr6" podStartSLOduration=3.53630857 podStartE2EDuration="7.094929483s" podCreationTimestamp="2026-02-16 22:27:34 +0000 UTC" firstStartedPulling="2026-02-16 22:27:36.00124238 +0000 UTC m=+5473.819925675" lastFinishedPulling="2026-02-16 22:27:39.559863293 +0000 UTC m=+5477.378546588" observedRunningTime="2026-02-16 22:27:41.090503174 +0000 UTC m=+5478.909186489" watchObservedRunningTime="2026-02-16 22:27:41.094929483 +0000 UTC m=+5478.913612778" Feb 16 22:27:44 crc kubenswrapper[4805]: E0216 22:27:44.599009 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:27:44 crc kubenswrapper[4805]: I0216 22:27:44.729883 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:44 crc kubenswrapper[4805]: I0216 22:27:44.729975 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:44 crc kubenswrapper[4805]: I0216 22:27:44.778487 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:45 crc kubenswrapper[4805]: I0216 22:27:45.165374 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:45 crc kubenswrapper[4805]: I0216 22:27:45.261199 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqgr6"] Feb 16 22:27:47 crc kubenswrapper[4805]: I0216 22:27:47.127509 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wqgr6" podUID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" containerName="registry-server" containerID="cri-o://dbfbc51313628031f9a8a9e0cd88c717611cee279962b7d3a2b868d785c526f7" gracePeriod=2 Feb 16 22:27:47 crc kubenswrapper[4805]: E0216 22:27:47.759813 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:27:47 crc kubenswrapper[4805]: E0216 22:27:47.760181 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:27:47 crc kubenswrapper[4805]: E0216 22:27:47.760323 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca
-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 22:27:47 crc kubenswrapper[4805]: E0216 22:27:47.761534 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.144213 4805 generic.go:334] "Generic (PLEG): container finished" podID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" containerID="dbfbc51313628031f9a8a9e0cd88c717611cee279962b7d3a2b868d785c526f7" exitCode=0 Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.145322 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqgr6" event={"ID":"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af","Type":"ContainerDied","Data":"dbfbc51313628031f9a8a9e0cd88c717611cee279962b7d3a2b868d785c526f7"} Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.145412 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqgr6" event={"ID":"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af","Type":"ContainerDied","Data":"1ed6f755c826b08b6e87c894d29c382a21c9c048b38e90579a4dc4345ee1b174"} Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.145498 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed6f755c826b08b6e87c894d29c382a21c9c048b38e90579a4dc4345ee1b174" Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.403304 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.490301 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-utilities\") pod \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.490861 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2b7n\" (UniqueName: \"kubernetes.io/projected/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-kube-api-access-s2b7n\") pod \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.491005 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-catalog-content\") pod \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\" (UID: \"9dbc8706-4a65-4b2d-bea9-800dc0dcb4af\") " Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.491085 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-utilities" (OuterVolumeSpecName: "utilities") pod "9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" (UID: "9dbc8706-4a65-4b2d-bea9-800dc0dcb4af"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.491758 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.506463 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-kube-api-access-s2b7n" (OuterVolumeSpecName: "kube-api-access-s2b7n") pod "9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" (UID: "9dbc8706-4a65-4b2d-bea9-800dc0dcb4af"). InnerVolumeSpecName "kube-api-access-s2b7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.570228 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" (UID: "9dbc8706-4a65-4b2d-bea9-800dc0dcb4af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.593953 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2b7n\" (UniqueName: \"kubernetes.io/projected/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-kube-api-access-s2b7n\") on node \"crc\" DevicePath \"\"" Feb 16 22:27:48 crc kubenswrapper[4805]: I0216 22:27:48.594245 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:27:49 crc kubenswrapper[4805]: I0216 22:27:49.158314 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqgr6" Feb 16 22:27:49 crc kubenswrapper[4805]: I0216 22:27:49.240773 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqgr6"] Feb 16 22:27:49 crc kubenswrapper[4805]: I0216 22:27:49.250891 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wqgr6"] Feb 16 22:27:49 crc kubenswrapper[4805]: I0216 22:27:49.611391 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" path="/var/lib/kubelet/pods/9dbc8706-4a65-4b2d-bea9-800dc0dcb4af/volumes" Feb 16 22:27:55 crc kubenswrapper[4805]: E0216 22:27:55.606343 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:27:59 crc kubenswrapper[4805]: E0216 22:27:59.605668 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:28:07 crc kubenswrapper[4805]: E0216 22:28:07.599402 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:28:14 crc kubenswrapper[4805]: E0216 22:28:14.603647 4805 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:28:19 crc kubenswrapper[4805]: E0216 22:28:19.602326 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:28:25 crc kubenswrapper[4805]: E0216 22:28:25.612610 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:28:31 crc kubenswrapper[4805]: E0216 22:28:31.600318 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:28:37 crc kubenswrapper[4805]: E0216 22:28:37.599894 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:28:46 crc kubenswrapper[4805]: E0216 22:28:46.600160 4805 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:28:51 crc kubenswrapper[4805]: E0216 22:28:51.600973 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:28:58 crc kubenswrapper[4805]: E0216 22:28:58.600749 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:29:05 crc kubenswrapper[4805]: E0216 22:29:05.612278 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:29:08 crc kubenswrapper[4805]: I0216 22:29:08.100097 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:29:08 crc kubenswrapper[4805]: I0216 22:29:08.100776 4805 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:29:10 crc kubenswrapper[4805]: E0216 22:29:10.599598 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:29:13 crc kubenswrapper[4805]: I0216 22:29:13.186681 4805 generic.go:334] "Generic (PLEG): container finished" podID="4ec087dc-4c20-4c0d-893d-f0ccaf92477e" containerID="406677f5705d8a29370ef1b1387d601091bdfbb2566cd08face4e8aa2a19372b" exitCode=0 Feb 16 22:29:13 crc kubenswrapper[4805]: I0216 22:29:13.186784 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p7xph/must-gather-mh9w9" event={"ID":"4ec087dc-4c20-4c0d-893d-f0ccaf92477e","Type":"ContainerDied","Data":"406677f5705d8a29370ef1b1387d601091bdfbb2566cd08face4e8aa2a19372b"} Feb 16 22:29:13 crc kubenswrapper[4805]: I0216 22:29:13.188513 4805 scope.go:117] "RemoveContainer" containerID="406677f5705d8a29370ef1b1387d601091bdfbb2566cd08face4e8aa2a19372b" Feb 16 22:29:13 crc kubenswrapper[4805]: I0216 22:29:13.387299 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p7xph_must-gather-mh9w9_4ec087dc-4c20-4c0d-893d-f0ccaf92477e/gather/0.log" Feb 16 22:29:18 crc kubenswrapper[4805]: E0216 22:29:18.602357 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.025923 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p7xph/must-gather-mh9w9"] Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.026597 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-p7xph/must-gather-mh9w9" podUID="4ec087dc-4c20-4c0d-893d-f0ccaf92477e" containerName="copy" containerID="cri-o://522a99826e0337f36cd4123134f69f92ae876458114fdffde9bf3ccaeb377b91" gracePeriod=2 Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.035880 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p7xph/must-gather-mh9w9"] Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.328214 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p7xph_must-gather-mh9w9_4ec087dc-4c20-4c0d-893d-f0ccaf92477e/copy/0.log" Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.329087 4805 generic.go:334] "Generic (PLEG): container finished" podID="4ec087dc-4c20-4c0d-893d-f0ccaf92477e" containerID="522a99826e0337f36cd4123134f69f92ae876458114fdffde9bf3ccaeb377b91" exitCode=143 Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.528536 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p7xph_must-gather-mh9w9_4ec087dc-4c20-4c0d-893d-f0ccaf92477e/copy/0.log" Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.529008 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p7xph/must-gather-mh9w9" Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.660772 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-must-gather-output\") pod \"4ec087dc-4c20-4c0d-893d-f0ccaf92477e\" (UID: \"4ec087dc-4c20-4c0d-893d-f0ccaf92477e\") " Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.660956 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82xp6\" (UniqueName: \"kubernetes.io/projected/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-kube-api-access-82xp6\") pod \"4ec087dc-4c20-4c0d-893d-f0ccaf92477e\" (UID: \"4ec087dc-4c20-4c0d-893d-f0ccaf92477e\") " Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.672889 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-kube-api-access-82xp6" (OuterVolumeSpecName: "kube-api-access-82xp6") pod "4ec087dc-4c20-4c0d-893d-f0ccaf92477e" (UID: "4ec087dc-4c20-4c0d-893d-f0ccaf92477e"). InnerVolumeSpecName "kube-api-access-82xp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.771376 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82xp6\" (UniqueName: \"kubernetes.io/projected/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-kube-api-access-82xp6\") on node \"crc\" DevicePath \"\"" Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.865286 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4ec087dc-4c20-4c0d-893d-f0ccaf92477e" (UID: "4ec087dc-4c20-4c0d-893d-f0ccaf92477e"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:29:22 crc kubenswrapper[4805]: I0216 22:29:22.873064 4805 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4ec087dc-4c20-4c0d-893d-f0ccaf92477e-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 22:29:23 crc kubenswrapper[4805]: I0216 22:29:23.339862 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p7xph_must-gather-mh9w9_4ec087dc-4c20-4c0d-893d-f0ccaf92477e/copy/0.log" Feb 16 22:29:23 crc kubenswrapper[4805]: I0216 22:29:23.341758 4805 scope.go:117] "RemoveContainer" containerID="522a99826e0337f36cd4123134f69f92ae876458114fdffde9bf3ccaeb377b91" Feb 16 22:29:23 crc kubenswrapper[4805]: I0216 22:29:23.341906 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p7xph/must-gather-mh9w9" Feb 16 22:29:23 crc kubenswrapper[4805]: I0216 22:29:23.378394 4805 scope.go:117] "RemoveContainer" containerID="406677f5705d8a29370ef1b1387d601091bdfbb2566cd08face4e8aa2a19372b" Feb 16 22:29:23 crc kubenswrapper[4805]: I0216 22:29:23.615578 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ec087dc-4c20-4c0d-893d-f0ccaf92477e" path="/var/lib/kubelet/pods/4ec087dc-4c20-4c0d-893d-f0ccaf92477e/volumes" Feb 16 22:29:25 crc kubenswrapper[4805]: E0216 22:29:25.599521 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:29:31 crc kubenswrapper[4805]: E0216 22:29:31.600487 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:29:38 crc kubenswrapper[4805]: I0216 22:29:38.099842 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:29:38 crc kubenswrapper[4805]: I0216 22:29:38.100476 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:29:38 crc kubenswrapper[4805]: E0216 22:29:38.600337 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:29:44 crc kubenswrapper[4805]: E0216 22:29:44.601845 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:29:51 crc kubenswrapper[4805]: E0216 22:29:51.601220 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:29:55 crc kubenswrapper[4805]: E0216 22:29:55.600621 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.174224 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq"] Feb 16 22:30:00 crc kubenswrapper[4805]: E0216 22:30:00.175258 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec087dc-4c20-4c0d-893d-f0ccaf92477e" containerName="gather" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.175273 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec087dc-4c20-4c0d-893d-f0ccaf92477e" containerName="gather" Feb 16 22:30:00 crc kubenswrapper[4805]: E0216 22:30:00.175303 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" containerName="extract-content" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.175312 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" containerName="extract-content" Feb 16 22:30:00 crc kubenswrapper[4805]: E0216 22:30:00.175332 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" containerName="registry-server" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.175341 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" containerName="registry-server" Feb 16 22:30:00 crc kubenswrapper[4805]: E0216 
22:30:00.175383 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec087dc-4c20-4c0d-893d-f0ccaf92477e" containerName="copy" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.175392 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec087dc-4c20-4c0d-893d-f0ccaf92477e" containerName="copy" Feb 16 22:30:00 crc kubenswrapper[4805]: E0216 22:30:00.175409 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" containerName="extract-utilities" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.175418 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" containerName="extract-utilities" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.175673 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dbc8706-4a65-4b2d-bea9-800dc0dcb4af" containerName="registry-server" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.175739 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec087dc-4c20-4c0d-893d-f0ccaf92477e" containerName="copy" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.175760 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec087dc-4c20-4c0d-893d-f0ccaf92477e" containerName="gather" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.176702 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.179276 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.194000 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.197666 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq"] Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.224514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-config-volume\") pod \"collect-profiles-29521350-5f8dq\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.224592 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lt2l\" (UniqueName: \"kubernetes.io/projected/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-kube-api-access-5lt2l\") pod \"collect-profiles-29521350-5f8dq\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.224949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-secret-volume\") pod \"collect-profiles-29521350-5f8dq\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.327392 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-secret-volume\") pod \"collect-profiles-29521350-5f8dq\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.327645 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-config-volume\") pod \"collect-profiles-29521350-5f8dq\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.327672 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lt2l\" (UniqueName: \"kubernetes.io/projected/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-kube-api-access-5lt2l\") pod \"collect-profiles-29521350-5f8dq\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.328995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-config-volume\") pod \"collect-profiles-29521350-5f8dq\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.335165 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-secret-volume\") pod \"collect-profiles-29521350-5f8dq\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.348139 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lt2l\" (UniqueName: \"kubernetes.io/projected/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-kube-api-access-5lt2l\") pod \"collect-profiles-29521350-5f8dq\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:00 crc kubenswrapper[4805]: I0216 22:30:00.500469 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:01 crc kubenswrapper[4805]: I0216 22:30:01.181236 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq"] Feb 16 22:30:01 crc kubenswrapper[4805]: W0216 22:30:01.185895 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aac7e29_d7ab_4202_aece_ec4d1666bdb3.slice/crio-4e628c15ddb4af7f2a9496ad08f53c512a133ea1b1164b8c158e33fb8b114bb5 WatchSource:0}: Error finding container 4e628c15ddb4af7f2a9496ad08f53c512a133ea1b1164b8c158e33fb8b114bb5: Status 404 returned error can't find the container with id 4e628c15ddb4af7f2a9496ad08f53c512a133ea1b1164b8c158e33fb8b114bb5 Feb 16 22:30:01 crc kubenswrapper[4805]: I0216 22:30:01.850502 4805 generic.go:334] "Generic (PLEG): container finished" podID="7aac7e29-d7ab-4202-aece-ec4d1666bdb3" containerID="39a1d19fe7f9f941e94a9c41969932c3b11601b102a70b86f07ed49ec68ddc8e" exitCode=0 Feb 16 22:30:01 crc kubenswrapper[4805]: I0216 22:30:01.850609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" event={"ID":"7aac7e29-d7ab-4202-aece-ec4d1666bdb3","Type":"ContainerDied","Data":"39a1d19fe7f9f941e94a9c41969932c3b11601b102a70b86f07ed49ec68ddc8e"} Feb 16 22:30:01 crc kubenswrapper[4805]: I0216 22:30:01.850939 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" event={"ID":"7aac7e29-d7ab-4202-aece-ec4d1666bdb3","Type":"ContainerStarted","Data":"4e628c15ddb4af7f2a9496ad08f53c512a133ea1b1164b8c158e33fb8b114bb5"} Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.243969 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.324432 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-secret-volume\") pod \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.324643 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-config-volume\") pod \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.324852 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lt2l\" (UniqueName: \"kubernetes.io/projected/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-kube-api-access-5lt2l\") pod \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\" (UID: \"7aac7e29-d7ab-4202-aece-ec4d1666bdb3\") " Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.326603 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-config-volume" (OuterVolumeSpecName: "config-volume") pod "7aac7e29-d7ab-4202-aece-ec4d1666bdb3" (UID: "7aac7e29-d7ab-4202-aece-ec4d1666bdb3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.350907 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-kube-api-access-5lt2l" (OuterVolumeSpecName: "kube-api-access-5lt2l") pod "7aac7e29-d7ab-4202-aece-ec4d1666bdb3" (UID: "7aac7e29-d7ab-4202-aece-ec4d1666bdb3"). InnerVolumeSpecName "kube-api-access-5lt2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.352227 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7aac7e29-d7ab-4202-aece-ec4d1666bdb3" (UID: "7aac7e29-d7ab-4202-aece-ec4d1666bdb3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.428685 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.428893 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lt2l\" (UniqueName: \"kubernetes.io/projected/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-kube-api-access-5lt2l\") on node \"crc\" DevicePath \"\"" Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.428985 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aac7e29-d7ab-4202-aece-ec4d1666bdb3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:30:03 crc kubenswrapper[4805]: E0216 22:30:03.611195 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.879272 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" event={"ID":"7aac7e29-d7ab-4202-aece-ec4d1666bdb3","Type":"ContainerDied","Data":"4e628c15ddb4af7f2a9496ad08f53c512a133ea1b1164b8c158e33fb8b114bb5"} Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.879320 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e628c15ddb4af7f2a9496ad08f53c512a133ea1b1164b8c158e33fb8b114bb5" Feb 16 22:30:03 crc kubenswrapper[4805]: I0216 22:30:03.879590 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-5f8dq" Feb 16 22:30:04 crc kubenswrapper[4805]: I0216 22:30:04.334202 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm"] Feb 16 22:30:04 crc kubenswrapper[4805]: I0216 22:30:04.350386 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-fchzm"] Feb 16 22:30:05 crc kubenswrapper[4805]: I0216 22:30:05.614584 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a2300b-5c13-4318-a8b8-27ff7dad9fe7" path="/var/lib/kubelet/pods/65a2300b-5c13-4318-a8b8-27ff7dad9fe7/volumes" Feb 16 22:30:08 crc kubenswrapper[4805]: I0216 22:30:08.099965 4805 patch_prober.go:28] interesting pod/machine-config-daemon-gq8qd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:30:08 crc kubenswrapper[4805]: I0216 22:30:08.100635 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:30:08 crc kubenswrapper[4805]: I0216 22:30:08.100699 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" Feb 16 22:30:08 crc kubenswrapper[4805]: I0216 22:30:08.102080 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535"} 
pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:30:08 crc kubenswrapper[4805]: I0216 22:30:08.102190 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerName="machine-config-daemon" containerID="cri-o://b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" gracePeriod=600 Feb 16 22:30:08 crc kubenswrapper[4805]: E0216 22:30:08.237643 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:30:08 crc kubenswrapper[4805]: I0216 22:30:08.930273 4805 generic.go:334] "Generic (PLEG): container finished" podID="00c308fa-9d36-4fec-8717-6dbbe57523c6" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" exitCode=0 Feb 16 22:30:08 crc kubenswrapper[4805]: I0216 22:30:08.930363 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerDied","Data":"b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535"} Feb 16 22:30:08 crc kubenswrapper[4805]: I0216 22:30:08.930649 4805 scope.go:117] "RemoveContainer" containerID="d90cb53c820da42f245d8d47ebd494fcd16f9b063cd125b56ccd5fdeaf264f12" Feb 16 22:30:08 crc kubenswrapper[4805]: I0216 22:30:08.931175 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 
16 22:30:08 crc kubenswrapper[4805]: E0216 22:30:08.931507 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:30:10 crc kubenswrapper[4805]: E0216 22:30:10.601202 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:30:17 crc kubenswrapper[4805]: E0216 22:30:17.599790 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:30:21 crc kubenswrapper[4805]: I0216 22:30:21.598639 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:30:21 crc kubenswrapper[4805]: E0216 22:30:21.599936 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:30:25 crc 
kubenswrapper[4805]: E0216 22:30:25.601870 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:30:29 crc kubenswrapper[4805]: E0216 22:30:29.601611 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:30:34 crc kubenswrapper[4805]: I0216 22:30:34.598929 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:30:34 crc kubenswrapper[4805]: E0216 22:30:34.601633 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:30:39 crc kubenswrapper[4805]: I0216 22:30:39.108098 4805 scope.go:117] "RemoveContainer" containerID="e32c1f8c5d9291b38f9735491ecb0030aa76d5175d6c7d3d6fef0f8f8911eae4" Feb 16 22:30:40 crc kubenswrapper[4805]: E0216 22:30:40.599424 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:30:41 crc kubenswrapper[4805]: E0216 22:30:41.602753 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:30:49 crc kubenswrapper[4805]: I0216 22:30:49.598239 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:30:49 crc kubenswrapper[4805]: E0216 22:30:49.599299 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:30:52 crc kubenswrapper[4805]: E0216 22:30:52.601572 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:30:54 crc kubenswrapper[4805]: E0216 22:30:54.601886 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:31:03 crc kubenswrapper[4805]: I0216 22:31:03.606660 4805 
scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:31:03 crc kubenswrapper[4805]: E0216 22:31:03.607432 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:31:04 crc kubenswrapper[4805]: E0216 22:31:04.599803 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:31:09 crc kubenswrapper[4805]: E0216 22:31:09.601410 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:31:15 crc kubenswrapper[4805]: E0216 22:31:15.603358 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:31:18 crc kubenswrapper[4805]: I0216 22:31:18.598949 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" 
Feb 16 22:31:18 crc kubenswrapper[4805]: E0216 22:31:18.600235 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:31:21 crc kubenswrapper[4805]: E0216 22:31:21.603577 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:31:30 crc kubenswrapper[4805]: E0216 22:31:30.599749 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:31:31 crc kubenswrapper[4805]: I0216 22:31:31.599780 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:31:31 crc kubenswrapper[4805]: E0216 22:31:31.600267 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:31:33 crc 
kubenswrapper[4805]: E0216 22:31:33.600979 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:31:42 crc kubenswrapper[4805]: I0216 22:31:42.599246 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:31:42 crc kubenswrapper[4805]: E0216 22:31:42.602926 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:31:43 crc kubenswrapper[4805]: E0216 22:31:43.613448 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:31:44 crc kubenswrapper[4805]: E0216 22:31:44.600892 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:31:54 crc kubenswrapper[4805]: E0216 22:31:54.600427 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:31:55 crc kubenswrapper[4805]: E0216 22:31:55.600326 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:31:57 crc kubenswrapper[4805]: I0216 22:31:57.598211 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:31:57 crc kubenswrapper[4805]: E0216 22:31:57.598977 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:32:06 crc kubenswrapper[4805]: E0216 22:32:06.600289 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:32:06 crc kubenswrapper[4805]: E0216 22:32:06.600409 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:32:11 crc kubenswrapper[4805]: I0216 22:32:11.599603 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:32:11 crc kubenswrapper[4805]: E0216 22:32:11.600625 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:32:17 crc kubenswrapper[4805]: E0216 22:32:17.600523 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:32:20 crc kubenswrapper[4805]: E0216 22:32:20.602139 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:32:25 crc kubenswrapper[4805]: I0216 22:32:25.598043 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:32:25 crc kubenswrapper[4805]: E0216 22:32:25.599192 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:32:29 crc kubenswrapper[4805]: E0216 22:32:29.601393 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:32:35 crc kubenswrapper[4805]: E0216 22:32:35.600810 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:32:37 crc kubenswrapper[4805]: I0216 22:32:37.598626 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:32:37 crc kubenswrapper[4805]: E0216 22:32:37.599321 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:32:44 crc kubenswrapper[4805]: I0216 22:32:44.605855 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:32:44 crc 
kubenswrapper[4805]: E0216 22:32:44.727147 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:32:44 crc kubenswrapper[4805]: E0216 22:32:44.727256 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:32:44 crc kubenswrapper[4805]: E0216 22:32:44.728198 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl89q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-m2jhm_openstack(f1a75265-a8ae-4b0a-9719-085d3361edb7): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:32:44 crc kubenswrapper[4805]: E0216 22:32:44.729545 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:32:46 crc kubenswrapper[4805]: E0216 22:32:46.601446 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:32:51 crc kubenswrapper[4805]: I0216 22:32:51.599165 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:32:51 crc kubenswrapper[4805]: E0216 22:32:51.600383 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:32:58 crc 
kubenswrapper[4805]: E0216 22:32:58.601500 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:33:01 crc kubenswrapper[4805]: E0216 22:33:01.691095 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:33:01 crc kubenswrapper[4805]: E0216 22:33:01.691746 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:33:01 crc kubenswrapper[4805]: E0216 22:33:01.691898 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hcch67ch7ch5b8h5f9h567hf7h679h4hc7hb4h79hc4hb4h64ch57bh668h689h59bh9fh647hcfh545h568hb8hc8h549h65fh697h95h699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vpz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2bbe998-2ee6-4b84-b723-42b1c4381ebc): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:33:01 crc kubenswrapper[4805]: E0216 22:33:01.693041 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:33:02 crc kubenswrapper[4805]: I0216 22:33:02.597900 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:33:02 crc kubenswrapper[4805]: E0216 22:33:02.599196 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:33:11 crc kubenswrapper[4805]: E0216 22:33:11.605180 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:33:13 crc kubenswrapper[4805]: E0216 22:33:13.615336 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:33:15 crc kubenswrapper[4805]: I0216 22:33:15.599372 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:33:15 crc kubenswrapper[4805]: E0216 22:33:15.601261 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:33:23 crc kubenswrapper[4805]: E0216 22:33:23.611498 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:33:27 crc kubenswrapper[4805]: I0216 22:33:27.599445 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:33:27 crc kubenswrapper[4805]: E0216 22:33:27.602179 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:33:27 crc kubenswrapper[4805]: E0216 22:33:27.602549 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:33:36 crc kubenswrapper[4805]: E0216 22:33:36.600808 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:33:38 crc kubenswrapper[4805]: E0216 22:33:38.600382 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:33:39 crc kubenswrapper[4805]: I0216 22:33:39.254020 4805 scope.go:117] "RemoveContainer" containerID="667ae61cc636e79c702d097523853497e64fd2f8b15d10d3d242aa660c957727" Feb 16 22:33:39 crc kubenswrapper[4805]: I0216 22:33:39.275414 4805 scope.go:117] "RemoveContainer" containerID="08196a677cecdfcc4ab66a6b8cf8f1707a4c9d43f9c4663a1e7ac6fb7d73fade" Feb 16 22:33:42 crc kubenswrapper[4805]: I0216 22:33:42.598811 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:33:42 crc kubenswrapper[4805]: E0216 22:33:42.599903 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:33:50 crc kubenswrapper[4805]: E0216 22:33:50.600206 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" 
podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.432714 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cwbln"] Feb 16 22:33:52 crc kubenswrapper[4805]: E0216 22:33:52.433446 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aac7e29-d7ab-4202-aece-ec4d1666bdb3" containerName="collect-profiles" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.433457 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aac7e29-d7ab-4202-aece-ec4d1666bdb3" containerName="collect-profiles" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.433679 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aac7e29-d7ab-4202-aece-ec4d1666bdb3" containerName="collect-profiles" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.435332 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.456238 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwbln"] Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.509914 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9wtg\" (UniqueName: \"kubernetes.io/projected/97f325c1-ad0a-4eb1-a542-00a254191290-kube-api-access-j9wtg\") pod \"redhat-operators-cwbln\" (UID: \"97f325c1-ad0a-4eb1-a542-00a254191290\") " pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.510054 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-utilities\") pod \"redhat-operators-cwbln\" (UID: \"97f325c1-ad0a-4eb1-a542-00a254191290\") " pod="openshift-marketplace/redhat-operators-cwbln" Feb 
16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.510135 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-catalog-content\") pod \"redhat-operators-cwbln\" (UID: \"97f325c1-ad0a-4eb1-a542-00a254191290\") " pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:33:52 crc kubenswrapper[4805]: E0216 22:33:52.609096 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.617412 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-catalog-content\") pod \"redhat-operators-cwbln\" (UID: \"97f325c1-ad0a-4eb1-a542-00a254191290\") " pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.617534 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9wtg\" (UniqueName: \"kubernetes.io/projected/97f325c1-ad0a-4eb1-a542-00a254191290-kube-api-access-j9wtg\") pod \"redhat-operators-cwbln\" (UID: \"97f325c1-ad0a-4eb1-a542-00a254191290\") " pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.617628 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-utilities\") pod \"redhat-operators-cwbln\" (UID: \"97f325c1-ad0a-4eb1-a542-00a254191290\") " pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 
22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.618197 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-utilities\") pod \"redhat-operators-cwbln\" (UID: \"97f325c1-ad0a-4eb1-a542-00a254191290\") " pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.628021 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-catalog-content\") pod \"redhat-operators-cwbln\" (UID: \"97f325c1-ad0a-4eb1-a542-00a254191290\") " pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.665030 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9wtg\" (UniqueName: \"kubernetes.io/projected/97f325c1-ad0a-4eb1-a542-00a254191290-kube-api-access-j9wtg\") pod \"redhat-operators-cwbln\" (UID: \"97f325c1-ad0a-4eb1-a542-00a254191290\") " pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:33:52 crc kubenswrapper[4805]: I0216 22:33:52.756443 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:33:53 crc kubenswrapper[4805]: I0216 22:33:53.256541 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwbln"] Feb 16 22:33:53 crc kubenswrapper[4805]: I0216 22:33:53.948114 4805 generic.go:334] "Generic (PLEG): container finished" podID="97f325c1-ad0a-4eb1-a542-00a254191290" containerID="e5e6dcd15f92366c4232aa39f82702ede9ba772118b9d3132b85f8e493b658a5" exitCode=0 Feb 16 22:33:53 crc kubenswrapper[4805]: I0216 22:33:53.948430 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwbln" event={"ID":"97f325c1-ad0a-4eb1-a542-00a254191290","Type":"ContainerDied","Data":"e5e6dcd15f92366c4232aa39f82702ede9ba772118b9d3132b85f8e493b658a5"} Feb 16 22:33:53 crc kubenswrapper[4805]: I0216 22:33:53.948636 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwbln" event={"ID":"97f325c1-ad0a-4eb1-a542-00a254191290","Type":"ContainerStarted","Data":"3cf5e09f6e4cdd1057df7fc17ca9611f6b74b83f91cc2f5c8e16e92dda8deb32"} Feb 16 22:33:55 crc kubenswrapper[4805]: I0216 22:33:55.600605 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:33:55 crc kubenswrapper[4805]: E0216 22:33:55.601511 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:33:55 crc kubenswrapper[4805]: I0216 22:33:55.972210 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwbln" 
event={"ID":"97f325c1-ad0a-4eb1-a542-00a254191290","Type":"ContainerStarted","Data":"962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1"} Feb 16 22:34:00 crc kubenswrapper[4805]: I0216 22:34:00.042561 4805 generic.go:334] "Generic (PLEG): container finished" podID="97f325c1-ad0a-4eb1-a542-00a254191290" containerID="962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1" exitCode=0 Feb 16 22:34:00 crc kubenswrapper[4805]: I0216 22:34:00.042667 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwbln" event={"ID":"97f325c1-ad0a-4eb1-a542-00a254191290","Type":"ContainerDied","Data":"962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1"} Feb 16 22:34:01 crc kubenswrapper[4805]: I0216 22:34:01.062741 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwbln" event={"ID":"97f325c1-ad0a-4eb1-a542-00a254191290","Type":"ContainerStarted","Data":"e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515"} Feb 16 22:34:01 crc kubenswrapper[4805]: I0216 22:34:01.093389 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cwbln" podStartSLOduration=2.604748743 podStartE2EDuration="9.093047747s" podCreationTimestamp="2026-02-16 22:33:52 +0000 UTC" firstStartedPulling="2026-02-16 22:33:53.95107555 +0000 UTC m=+5851.769758845" lastFinishedPulling="2026-02-16 22:34:00.439374544 +0000 UTC m=+5858.258057849" observedRunningTime="2026-02-16 22:34:01.084364134 +0000 UTC m=+5858.903047429" watchObservedRunningTime="2026-02-16 22:34:01.093047747 +0000 UTC m=+5858.911731052" Feb 16 22:34:02 crc kubenswrapper[4805]: I0216 22:34:02.756580 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:34:02 crc kubenswrapper[4805]: I0216 22:34:02.756867 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:34:03 crc kubenswrapper[4805]: E0216 22:34:03.601640 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:34:03 crc kubenswrapper[4805]: E0216 22:34:03.621081 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:34:03 crc kubenswrapper[4805]: I0216 22:34:03.831199 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cwbln" podUID="97f325c1-ad0a-4eb1-a542-00a254191290" containerName="registry-server" probeResult="failure" output=< Feb 16 22:34:03 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 22:34:03 crc kubenswrapper[4805]: > Feb 16 22:34:09 crc kubenswrapper[4805]: I0216 22:34:09.598886 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:34:09 crc kubenswrapper[4805]: E0216 22:34:09.600068 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:34:13 crc kubenswrapper[4805]: I0216 
22:34:13.806788 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cwbln" podUID="97f325c1-ad0a-4eb1-a542-00a254191290" containerName="registry-server" probeResult="failure" output=< Feb 16 22:34:13 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 16 22:34:13 crc kubenswrapper[4805]: > Feb 16 22:34:15 crc kubenswrapper[4805]: E0216 22:34:15.602228 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:34:19 crc kubenswrapper[4805]: E0216 22:34:19.607963 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:34:21 crc kubenswrapper[4805]: I0216 22:34:21.597930 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:34:21 crc kubenswrapper[4805]: E0216 22:34:21.598538 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:34:22 crc kubenswrapper[4805]: I0216 22:34:22.808367 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:34:22 crc kubenswrapper[4805]: I0216 22:34:22.855069 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:34:23 crc kubenswrapper[4805]: I0216 22:34:23.658278 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwbln"] Feb 16 22:34:24 crc kubenswrapper[4805]: I0216 22:34:24.307994 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cwbln" podUID="97f325c1-ad0a-4eb1-a542-00a254191290" containerName="registry-server" containerID="cri-o://e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515" gracePeriod=2 Feb 16 22:34:24 crc kubenswrapper[4805]: I0216 22:34:24.872107 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:34:24 crc kubenswrapper[4805]: I0216 22:34:24.977571 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9wtg\" (UniqueName: \"kubernetes.io/projected/97f325c1-ad0a-4eb1-a542-00a254191290-kube-api-access-j9wtg\") pod \"97f325c1-ad0a-4eb1-a542-00a254191290\" (UID: \"97f325c1-ad0a-4eb1-a542-00a254191290\") " Feb 16 22:34:24 crc kubenswrapper[4805]: I0216 22:34:24.977774 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-utilities\") pod \"97f325c1-ad0a-4eb1-a542-00a254191290\" (UID: \"97f325c1-ad0a-4eb1-a542-00a254191290\") " Feb 16 22:34:24 crc kubenswrapper[4805]: I0216 22:34:24.977920 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-catalog-content\") pod \"97f325c1-ad0a-4eb1-a542-00a254191290\" (UID: 
\"97f325c1-ad0a-4eb1-a542-00a254191290\") " Feb 16 22:34:24 crc kubenswrapper[4805]: I0216 22:34:24.982790 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-utilities" (OuterVolumeSpecName: "utilities") pod "97f325c1-ad0a-4eb1-a542-00a254191290" (UID: "97f325c1-ad0a-4eb1-a542-00a254191290"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:34:24 crc kubenswrapper[4805]: I0216 22:34:24.989659 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f325c1-ad0a-4eb1-a542-00a254191290-kube-api-access-j9wtg" (OuterVolumeSpecName: "kube-api-access-j9wtg") pod "97f325c1-ad0a-4eb1-a542-00a254191290" (UID: "97f325c1-ad0a-4eb1-a542-00a254191290"). InnerVolumeSpecName "kube-api-access-j9wtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.080376 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9wtg\" (UniqueName: \"kubernetes.io/projected/97f325c1-ad0a-4eb1-a542-00a254191290-kube-api-access-j9wtg\") on node \"crc\" DevicePath \"\"" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.080420 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.121583 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97f325c1-ad0a-4eb1-a542-00a254191290" (UID: "97f325c1-ad0a-4eb1-a542-00a254191290"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.183488 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f325c1-ad0a-4eb1-a542-00a254191290-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.340894 4805 generic.go:334] "Generic (PLEG): container finished" podID="97f325c1-ad0a-4eb1-a542-00a254191290" containerID="e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515" exitCode=0 Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.340939 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwbln" event={"ID":"97f325c1-ad0a-4eb1-a542-00a254191290","Type":"ContainerDied","Data":"e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515"} Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.340966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwbln" event={"ID":"97f325c1-ad0a-4eb1-a542-00a254191290","Type":"ContainerDied","Data":"3cf5e09f6e4cdd1057df7fc17ca9611f6b74b83f91cc2f5c8e16e92dda8deb32"} Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.340982 4805 scope.go:117] "RemoveContainer" containerID="e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.341018 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cwbln" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.366347 4805 scope.go:117] "RemoveContainer" containerID="962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.402302 4805 scope.go:117] "RemoveContainer" containerID="e5e6dcd15f92366c4232aa39f82702ede9ba772118b9d3132b85f8e493b658a5" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.405561 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwbln"] Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.416742 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cwbln"] Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.489525 4805 scope.go:117] "RemoveContainer" containerID="e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515" Feb 16 22:34:25 crc kubenswrapper[4805]: E0216 22:34:25.489894 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515\": container with ID starting with e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515 not found: ID does not exist" containerID="e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.489932 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515"} err="failed to get container status \"e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515\": rpc error: code = NotFound desc = could not find container \"e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515\": container with ID starting with e46480e38e63be42c43da886dafd09d1f6db4d4227d59cb147b001bf27ca6515 not found: ID does 
not exist" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.489954 4805 scope.go:117] "RemoveContainer" containerID="962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1" Feb 16 22:34:25 crc kubenswrapper[4805]: E0216 22:34:25.490286 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1\": container with ID starting with 962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1 not found: ID does not exist" containerID="962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.490322 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1"} err="failed to get container status \"962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1\": rpc error: code = NotFound desc = could not find container \"962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1\": container with ID starting with 962044a8aae286d262978c60dc2140bc2b375eda7802fa5d5b7fb2a6a57addb1 not found: ID does not exist" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.490340 4805 scope.go:117] "RemoveContainer" containerID="e5e6dcd15f92366c4232aa39f82702ede9ba772118b9d3132b85f8e493b658a5" Feb 16 22:34:25 crc kubenswrapper[4805]: E0216 22:34:25.490674 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5e6dcd15f92366c4232aa39f82702ede9ba772118b9d3132b85f8e493b658a5\": container with ID starting with e5e6dcd15f92366c4232aa39f82702ede9ba772118b9d3132b85f8e493b658a5 not found: ID does not exist" containerID="e5e6dcd15f92366c4232aa39f82702ede9ba772118b9d3132b85f8e493b658a5" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.490702 4805 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e6dcd15f92366c4232aa39f82702ede9ba772118b9d3132b85f8e493b658a5"} err="failed to get container status \"e5e6dcd15f92366c4232aa39f82702ede9ba772118b9d3132b85f8e493b658a5\": rpc error: code = NotFound desc = could not find container \"e5e6dcd15f92366c4232aa39f82702ede9ba772118b9d3132b85f8e493b658a5\": container with ID starting with e5e6dcd15f92366c4232aa39f82702ede9ba772118b9d3132b85f8e493b658a5 not found: ID does not exist" Feb 16 22:34:25 crc kubenswrapper[4805]: I0216 22:34:25.615288 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f325c1-ad0a-4eb1-a542-00a254191290" path="/var/lib/kubelet/pods/97f325c1-ad0a-4eb1-a542-00a254191290/volumes" Feb 16 22:34:29 crc kubenswrapper[4805]: E0216 22:34:29.602671 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:34:31 crc kubenswrapper[4805]: E0216 22:34:31.601931 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:34:35 crc kubenswrapper[4805]: I0216 22:34:35.599298 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:34:35 crc kubenswrapper[4805]: E0216 22:34:35.600487 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:34:39 crc kubenswrapper[4805]: I0216 22:34:39.390154 4805 scope.go:117] "RemoveContainer" containerID="dbfbc51313628031f9a8a9e0cd88c717611cee279962b7d3a2b868d785c526f7" Feb 16 22:34:41 crc kubenswrapper[4805]: E0216 22:34:41.602399 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:34:45 crc kubenswrapper[4805]: E0216 22:34:45.602641 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:34:47 crc kubenswrapper[4805]: I0216 22:34:47.598667 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:34:47 crc kubenswrapper[4805]: E0216 22:34:47.600394 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:34:56 crc kubenswrapper[4805]: E0216 22:34:56.601093 4805 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:34:58 crc kubenswrapper[4805]: I0216 22:34:58.598267 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:34:58 crc kubenswrapper[4805]: E0216 22:34:58.598825 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq8qd_openshift-machine-config-operator(00c308fa-9d36-4fec-8717-6dbbe57523c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" podUID="00c308fa-9d36-4fec-8717-6dbbe57523c6" Feb 16 22:34:59 crc kubenswrapper[4805]: E0216 22:34:59.600982 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:35:09 crc kubenswrapper[4805]: I0216 22:35:09.598570 4805 scope.go:117] "RemoveContainer" containerID="b214f17fa15fa63142d0b80e30e9ab5ea8c5936ee79d52b5fa7ef25a45ef0535" Feb 16 22:35:09 crc kubenswrapper[4805]: I0216 22:35:09.843297 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq8qd" event={"ID":"00c308fa-9d36-4fec-8717-6dbbe57523c6","Type":"ContainerStarted","Data":"5777fe001a59caaa5937a97f04bf5cf94fcf6f39b269e34ba9e999c43ae3966b"} Feb 16 22:35:10 crc kubenswrapper[4805]: E0216 22:35:10.601273 4805 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:35:14 crc kubenswrapper[4805]: E0216 22:35:14.599629 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7" Feb 16 22:35:23 crc kubenswrapper[4805]: E0216 22:35:23.626094 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f2bbe998-2ee6-4b84-b723-42b1c4381ebc" Feb 16 22:35:27 crc kubenswrapper[4805]: E0216 22:35:27.600160 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-m2jhm" podUID="f1a75265-a8ae-4b0a-9719-085d3361edb7"